2023-07-23 22:10:13,046 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65
2023-07-23 22:10:13,068 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-23 22:10:13,091 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 22:10:13,092 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9, deleteOnExit=true
2023-07-23 22:10:13,092 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 22:10:13,093 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/test.cache.data in system properties and HBase conf
2023-07-23 22:10:13,093 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 22:10:13,094 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir in system properties and HBase conf
2023-07-23 22:10:13,094 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 22:10:13,095 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 22:10:13,095 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 22:10:13,231 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-23 22:10:13,611 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 22:10:13,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 22:10:13,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 22:10:13,616 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 22:10:13,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 22:10:13,617 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 22:10:13,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 22:10:13,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 22:10:13,618 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 22:10:13,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 22:10:13,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/nfs.dump.dir in system properties and HBase conf
2023-07-23 22:10:13,619 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir in system properties and HBase conf
2023-07-23 22:10:13,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 22:10:13,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-23 22:10:13,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-23 22:10:14,158 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 22:10:14,163 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 22:10:14,457 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-23 22:10:14,633 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-23 22:10:14,652 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 22:10:14,696 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 22:10:14,735 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/Jetty_localhost_42733_hdfs____phrqyr/webapp
2023-07-23 22:10:14,910 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42733
2023-07-23 22:10:14,983 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 22:10:14,984 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 22:10:15,440 WARN [Listener at localhost/36271] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 22:10:15,538 WARN [Listener at localhost/36271] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 22:10:15,557 WARN [Listener at localhost/36271] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 22:10:15,565 INFO [Listener at localhost/36271] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 22:10:15,570 INFO [Listener at localhost/36271] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/Jetty_localhost_45557_datanode____jt8swt/webapp
2023-07-23 22:10:15,696 INFO [Listener at localhost/36271] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45557
2023-07-23 22:10:16,153 WARN [Listener at localhost/33803] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 22:10:16,200 WARN [Listener at localhost/33803] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 22:10:16,206 WARN [Listener at localhost/33803] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 22:10:16,210 INFO [Listener at localhost/33803] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 22:10:16,220 INFO [Listener at localhost/33803] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/Jetty_localhost_34931_datanode____9dkz7h/webapp
2023-07-23 22:10:16,360 INFO [Listener at localhost/33803] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34931
2023-07-23 22:10:16,402 WARN [Listener at localhost/39851] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 22:10:16,481 WARN [Listener at localhost/39851] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 22:10:16,486 WARN [Listener at localhost/39851] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 22:10:16,489 INFO [Listener at localhost/39851] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 22:10:16,503 INFO [Listener at localhost/39851] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/Jetty_localhost_37349_datanode____d85bv/webapp
2023-07-23 22:10:16,652 INFO [Listener at localhost/39851] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37349
2023-07-23 22:10:16,681 WARN [Listener at localhost/42675] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 22:10:16,754 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44e504eec44316fd: Processing first storage report for DS-24aad250-36de-4713-bdf0-09f9b911a9f6 from datanode e1821169-9edd-4e9d-bee0-ae4582003e75
2023-07-23 22:10:16,755 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44e504eec44316fd: from storage DS-24aad250-36de-4713-bdf0-09f9b911a9f6 node DatanodeRegistration(127.0.0.1:38869, datanodeUuid=e1821169-9edd-4e9d-bee0-ae4582003e75, infoPort=38687, infoSecurePort=0, ipcPort=39851, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 22:10:16,755 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30ce2bca123c595b: Processing first storage report for DS-31d3704b-bd83-48de-891f-b615b01573de from datanode 14886a0f-ba7a-4d66-a809-40a77b2db62c
2023-07-23 22:10:16,755 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30ce2bca123c595b: from storage DS-31d3704b-bd83-48de-891f-b615b01573de node DatanodeRegistration(127.0.0.1:46449, datanodeUuid=14886a0f-ba7a-4d66-a809-40a77b2db62c, infoPort=33563, infoSecurePort=0, ipcPort=33803, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 22:10:16,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44e504eec44316fd: Processing first storage report for DS-87c8fb44-7aed-4c95-b7d4-b598966bff9d from datanode e1821169-9edd-4e9d-bee0-ae4582003e75
2023-07-23 22:10:16,756 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44e504eec44316fd: from storage DS-87c8fb44-7aed-4c95-b7d4-b598966bff9d node DatanodeRegistration(127.0.0.1:38869, datanodeUuid=e1821169-9edd-4e9d-bee0-ae4582003e75, infoPort=38687, infoSecurePort=0, ipcPort=39851, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 22:10:16,756 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30ce2bca123c595b: Processing first storage report for DS-6f576111-890c-4e02-a891-22c87ccf642c from datanode 14886a0f-ba7a-4d66-a809-40a77b2db62c
2023-07-23 22:10:16,756 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30ce2bca123c595b: from storage DS-6f576111-890c-4e02-a891-22c87ccf642c node DatanodeRegistration(127.0.0.1:46449, datanodeUuid=14886a0f-ba7a-4d66-a809-40a77b2db62c, infoPort=33563, infoSecurePort=0, ipcPort=33803, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 22:10:16,794 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa731a0dc0150484e: Processing first storage report for DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc from datanode 815b915b-18a9-409a-9813-837d7f9c5956
2023-07-23 22:10:16,794 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa731a0dc0150484e: from storage DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc node DatanodeRegistration(127.0.0.1:35551, datanodeUuid=815b915b-18a9-409a-9813-837d7f9c5956, infoPort=35455, infoSecurePort=0, ipcPort=42675, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 22:10:16,795 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa731a0dc0150484e: Processing first storage report for DS-4ca7b4b0-1860-4129-9be2-833c6d53a7f4 from datanode 815b915b-18a9-409a-9813-837d7f9c5956
2023-07-23 22:10:16,795 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa731a0dc0150484e: from storage DS-4ca7b4b0-1860-4129-9be2-833c6d53a7f4 node DatanodeRegistration(127.0.0.1:35551, datanodeUuid=815b915b-18a9-409a-9813-837d7f9c5956, infoPort=35455, infoSecurePort=0, ipcPort=42675, storageInfo=lv=-57;cid=testClusterID;nsid=1685946745;c=1690150214233), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 22:10:17,073 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65
2023-07-23 22:10:17,201 INFO [Listener at localhost/42675] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/zookeeper_0, clientPort=52385, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-23 22:10:17,228 INFO [Listener at localhost/42675] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=52385
2023-07-23 22:10:17,240 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:17,242 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:17,949 INFO [Listener at localhost/42675] util.FSUtils(471): Created version file at hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9 with version=8
2023-07-23 22:10:17,949 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/hbase-staging
2023-07-23 22:10:17,957 DEBUG [Listener at localhost/42675] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-23 22:10:17,957 DEBUG [Listener at localhost/42675] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-23 22:10:17,957 DEBUG [Listener at localhost/42675] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-23 22:10:17,957 DEBUG [Listener at localhost/42675] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-23 22:10:18,287 INFO [Listener at localhost/42675] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-23 22:10:18,820 INFO [Listener at localhost/42675] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 22:10:18,867 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:18,868 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:18,868 INFO [Listener at localhost/42675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 22:10:18,869 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:18,869 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 22:10:19,041 INFO [Listener at localhost/42675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 22:10:19,131 DEBUG [Listener at localhost/42675] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-23 22:10:19,236 INFO [Listener at localhost/42675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37045
2023-07-23 22:10:19,247 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:19,249 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:19,271 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37045 connecting to ZooKeeper ensemble=127.0.0.1:52385
2023-07-23 22:10:19,313 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:370450x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 22:10:19,316 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37045-0x101943c28b20000 connected
2023-07-23 22:10:19,340 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 22:10:19,341 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:19,344 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 22:10:19,353 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37045
2023-07-23 22:10:19,353 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37045
2023-07-23 22:10:19,353 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37045
2023-07-23 22:10:19,354 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37045
2023-07-23 22:10:19,354 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37045
2023-07-23 22:10:19,385 INFO [Listener at localhost/42675] log.Log(170): Logging initialized @7114ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-23 22:10:19,519 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 22:10:19,519 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 22:10:19,520 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 22:10:19,522 INFO [Listener at localhost/42675] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-23 22:10:19,522 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 22:10:19,523 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 22:10:19,526 INFO [Listener at localhost/42675] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 22:10:19,595 INFO [Listener at localhost/42675] http.HttpServer(1146): Jetty bound to port 38931
2023-07-23 22:10:19,597 INFO [Listener at localhost/42675] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 22:10:19,644 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:19,648 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e48a43a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,AVAILABLE}
2023-07-23 22:10:19,649 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:19,650 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2269cb1d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 22:10:19,876 INFO [Listener at localhost/42675] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 22:10:19,890 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 22:10:19,891 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 22:10:19,893 INFO [Listener at localhost/42675] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-23 22:10:19,900 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:19,925 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@25becdec{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/jetty-0_0_0_0-38931-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5214066075344976879/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 22:10:19,937 INFO [Listener at localhost/42675] server.AbstractConnector(333): Started ServerConnector@6c6f2d1b{HTTP/1.1, (http/1.1)}{0.0.0.0:38931}
2023-07-23 22:10:19,937 INFO [Listener at localhost/42675] server.Server(415): Started @7665ms
2023-07-23 22:10:19,940 INFO [Listener at localhost/42675] master.HMaster(444): hbase.rootdir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9, hbase.cluster.distributed=false
2023-07-23 22:10:20,017 INFO [Listener at localhost/42675] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 22:10:20,017 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,017 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,018 INFO [Listener at localhost/42675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 22:10:20,018 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,018 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 22:10:20,023 INFO [Listener at localhost/42675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 22:10:20,027 INFO [Listener at localhost/42675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46085
2023-07-23 22:10:20,029 INFO [Listener at localhost/42675] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 22:10:20,037 DEBUG [Listener at localhost/42675] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 22:10:20,038 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:20,040 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:20,042 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46085 connecting to ZooKeeper ensemble=127.0.0.1:52385
2023-07-23 22:10:20,048 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:460850x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 22:10:20,049 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46085-0x101943c28b20001 connected
2023-07-23 22:10:20,050 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 22:10:20,051 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:20,052 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 22:10:20,053 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46085
2023-07-23 22:10:20,053 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46085
2023-07-23 22:10:20,053 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46085
2023-07-23 22:10:20,054 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46085
2023-07-23 22:10:20,054 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46085
2023-07-23 22:10:20,057 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 22:10:20,057 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 22:10:20,057 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 22:10:20,058 INFO [Listener
at localhost/42675] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:20,059 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:20,059 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:20,059 INFO [Listener at localhost/42675] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:20,062 INFO [Listener at localhost/42675] http.HttpServer(1146): Jetty bound to port 40633 2023-07-23 22:10:20,062 INFO [Listener at localhost/42675] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:20,068 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:20,068 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5033ffc4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:20,069 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:20,069 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1eb685a1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 
22:10:20,212 INFO [Listener at localhost/42675] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:20,214 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:20,214 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:20,214 INFO [Listener at localhost/42675] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:20,215 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:20,219 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@311e7d3c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/jetty-0_0_0_0-40633-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4682121649150974348/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:20,220 INFO [Listener at localhost/42675] server.AbstractConnector(333): Started ServerConnector@efe2f50{HTTP/1.1, (http/1.1)}{0.0.0.0:40633} 2023-07-23 22:10:20,220 INFO [Listener at localhost/42675] server.Server(415): Started @7949ms 2023-07-23 22:10:20,234 INFO [Listener at localhost/42675] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:20,235 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:20,235 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:20,235 INFO [Listener at localhost/42675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:20,236 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:20,236 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:20,236 INFO [Listener at localhost/42675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:20,238 INFO [Listener at localhost/42675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34191 2023-07-23 22:10:20,238 INFO [Listener at localhost/42675] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:20,240 DEBUG [Listener at localhost/42675] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:20,241 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:20,243 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:20,244 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34191 
connecting to ZooKeeper ensemble=127.0.0.1:52385 2023-07-23 22:10:20,248 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:341910x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:20,250 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34191-0x101943c28b20002 connected 2023-07-23 22:10:20,250 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:20,251 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:20,252 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:20,254 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34191 2023-07-23 22:10:20,255 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34191 2023-07-23 22:10:20,255 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34191 2023-07-23 22:10:20,255 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34191 2023-07-23 22:10:20,256 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34191 2023-07-23 22:10:20,259 INFO [Listener at 
localhost/42675] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:20,259 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:20,259 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:20,259 INFO [Listener at localhost/42675] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:20,260 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:20,260 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:20,260 INFO [Listener at localhost/42675] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 22:10:20,261 INFO [Listener at localhost/42675] http.HttpServer(1146): Jetty bound to port 37977
2023-07-23 22:10:20,261 INFO [Listener at localhost/42675] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 22:10:20,271 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,271 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f6d07f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,AVAILABLE}
2023-07-23 22:10:20,272 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,272 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fe3f683{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 22:10:20,388 INFO [Listener at localhost/42675] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 22:10:20,389 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 22:10:20,389 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 22:10:20,390 INFO [Listener at localhost/42675] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 22:10:20,390 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,391 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@617ee32b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/jetty-0_0_0_0-37977-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5114259791790870754/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:20,392 INFO [Listener at localhost/42675] server.AbstractConnector(333): Started ServerConnector@7edcaee8{HTTP/1.1, (http/1.1)}{0.0.0.0:37977}
2023-07-23 22:10:20,393 INFO [Listener at localhost/42675] server.Server(415): Started @8121ms
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 22:10:20,405 INFO [Listener at localhost/42675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 22:10:20,407 INFO [Listener at localhost/42675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41457
2023-07-23 22:10:20,407 INFO [Listener at localhost/42675] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 22:10:20,409 DEBUG [Listener at localhost/42675] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 22:10:20,411 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:20,412 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:20,413 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41457 connecting to ZooKeeper ensemble=127.0.0.1:52385
2023-07-23 22:10:20,416 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:414570x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 22:10:20,418 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41457-0x101943c28b20003 connected
2023-07-23 22:10:20,418 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 22:10:20,419 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:20,419 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 22:10:20,420 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41457
2023-07-23 22:10:20,420 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41457
2023-07-23 22:10:20,420 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41457
2023-07-23 22:10:20,421 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41457
2023-07-23 22:10:20,421 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41457
2023-07-23 22:10:20,423 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 22:10:20,423 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 22:10:20,423 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 22:10:20,424 INFO [Listener at localhost/42675] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 22:10:20,424 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 22:10:20,424 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 22:10:20,424 INFO [Listener at localhost/42675] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 22:10:20,425 INFO [Listener at localhost/42675] http.HttpServer(1146): Jetty bound to port 41701
2023-07-23 22:10:20,425 INFO [Listener at localhost/42675] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 22:10:20,427 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,428 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f7fbf09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,AVAILABLE}
2023-07-23 22:10:20,428 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,428 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@505a01fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 22:10:20,543 INFO [Listener at localhost/42675] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 22:10:20,544 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 22:10:20,544 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 22:10:20,545 INFO [Listener at localhost/42675] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 22:10:20,546 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:20,547 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@62311427{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/jetty-0_0_0_0-41701-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3232203957916350540/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:20,548 INFO [Listener at localhost/42675] server.AbstractConnector(333): Started ServerConnector@77041a21{HTTP/1.1, (http/1.1)}{0.0.0.0:41701}
2023-07-23 22:10:20,548 INFO [Listener at localhost/42675] server.Server(415): Started @8277ms
2023-07-23 22:10:20,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 22:10:20,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@399a4cd{HTTP/1.1, (http/1.1)}{0.0.0.0:44897}
2023-07-23 22:10:20,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8287ms
2023-07-23 22:10:20,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:20,569 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-23 22:10:20,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:20,588 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:20,588 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:20,588 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:20,588 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:20,590 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:20,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-23 22:10:20,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37045,1690150218110 from backup master directory
2023-07-23 22:10:20,592 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-23 22:10:20,596 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:20,597 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-23 22:10:20,597 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:20,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37045,1690150218110 2023-07-23 22:10:20,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-23 22:10:20,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-23 22:10:20,720 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/hbase.id with ID: 5f63c2c7-e6db-4025-95e9-260944de441a 2023-07-23 22:10:20,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:20,808 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:20,878 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3969437c to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:20,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ceff5b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:20,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:20,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-23 22:10:20,982 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-23 22:10:20,982 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-23 22:10:20,985 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-23 22:10:20,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-23 22:10:20,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:21,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store-tmp
2023-07-23 22:10:21,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:21,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-23 22:10:21,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:21,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:21,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-23 22:10:21,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:21,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:21,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 22:10:21,077 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/WALs/jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:21,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37045%2C1690150218110, suffix=, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/WALs/jenkins-hbase4.apache.org,37045,1690150218110, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/oldWALs, maxLogs=10
2023-07-23 22:10:21,162 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]
2023-07-23 22:10:21,162 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]
2023-07-23 22:10:21,162 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK]
2023-07-23 22:10:21,170 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf.
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-23 22:10:21,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/WALs/jenkins-hbase4.apache.org,37045,1690150218110/jenkins-hbase4.apache.org%2C37045%2C1690150218110.1690150221111
2023-07-23 22:10:21,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK], DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]]
2023-07-23 22:10:21,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:21,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:21,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,346 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,353 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc
2023-07-23 22:10:21,388 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc
2023-07-23 22:10:21,404 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:21,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682
2023-07-23 22:10:21,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:21,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10674754240, jitterRate=-0.005836039781570435}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:21,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 22:10:21,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4
2023-07-23 22:10:21,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5
2023-07-23 22:10:21,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50
2023-07-23 22:10:21,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery...
2023-07-23 22:10:21,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec
2023-07-23 22:10:21,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 43 msec
2023-07-23 22:10:21,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150
2023-07-23 22:10:21,554 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: []
2023-07-23 22:10:21,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'.
2023-07-23 22:10:21,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2023-07-23 22:10:21,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited
2023-07-23 22:10:21,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer
2023-07-23 22:10:21,591 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:21,595 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split
2023-07-23 22:10:21,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge
2023-07-23 22:10:21,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup
2023-07-23 22:10:21,629 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:21,630 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:21,630 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:21,630 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:21,630 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:21,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37045,1690150218110, sessionid=0x101943c28b20000, setting cluster-up flag (Was=false)
2023-07-23 22:10:21,653 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:21,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort
2023-07-23 22:10:21,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:21,667 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:21,672 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort
2023-07-23 22:10:21,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:21,677 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.hbase-snapshot/.tmp
2023-07-23 22:10:21,754 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(951): ClusterId : 5f63c2c7-e6db-4025-95e9-260944de441a
2023-07-23 22:10:21,754 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(951): ClusterId : 5f63c2c7-e6db-4025-95e9-260944de441a
2023-07-23 22:10:21,755 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(951): ClusterId : 5f63c2c7-e6db-4025-95e9-260944de441a
2023-07-23 22:10:21,761 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:21,761 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:21,761 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:21,772 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:21,772 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:21,772 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:21,772 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:21,772 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:21,772 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:21,778 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:21,778 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:21,781 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ReadOnlyZKClient(139): Connect 0x3d3ba0b4 to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:21,781 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ReadOnlyZKClient(139): Connect 0x66764ba4 to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:21,781 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:21,786 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ReadOnlyZKClient(139): Connect 0x02648456 to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:21,795 DEBUG [RS:0;jenkins-hbase4:46085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@758c59ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:21,795 DEBUG [RS:1;jenkins-hbase4:34191] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6529a87a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:21,796 DEBUG [RS:0;jenkins-hbase4:46085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71d4d691, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:21,796 DEBUG [RS:1;jenkins-hbase4:34191] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2af55344, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:21,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService
2023-07-23 22:10:21,799 DEBUG [RS:2;jenkins-hbase4:41457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ca37a6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:21,799 DEBUG [RS:2;jenkins-hbase4:41457] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@89cef26, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:21,811 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode.
2023-07-23 22:10:21,815 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:21,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911.
2023-07-23 22:10:21,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912.
2023-07-23 22:10:21,825 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41457
2023-07-23 22:10:21,827 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34191
2023-07-23 22:10:21,829 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46085
2023-07-23 22:10:21,834 INFO [RS:1;jenkins-hbase4:34191] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:21,834 INFO [RS:0;jenkins-hbase4:46085] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:21,835 INFO [RS:0;jenkins-hbase4:46085] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:21,834 INFO [RS:2;jenkins-hbase4:41457] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:21,835 INFO [RS:2;jenkins-hbase4:41457] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:21,835 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:21,834 INFO [RS:1;jenkins-hbase4:34191] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:21,835 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:21,835 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:21,839 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:41457, startcode=1690150220404
2023-07-23 22:10:21,839 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:46085, startcode=1690150220016
2023-07-23 22:10:21,839 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:34191, startcode=1690150220233
2023-07-23 22:10:21,864 DEBUG [RS:1;jenkins-hbase4:34191] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:21,864 DEBUG [RS:0;jenkins-hbase4:46085] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:21,864 DEBUG [RS:2;jenkins-hbase4:41457] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:21,941 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta
2023-07-23 22:10:21,954 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49035, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:21,955 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49043, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:21,954 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53447, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:21,966 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:21,979 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:21,981 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:21,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-07-23 22:10:22,019 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(2830): Master is not running yet
2023-07-23 22:10:22,019 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(2830): Master is not running yet
2023-07-23 22:10:22,019 WARN [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-23 22:10:22,019 WARN [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-23 22:10:22,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-07-23 22:10:22,020 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(2830): Master is not running yet
2023-07-23 22:10:22,020 WARN [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-23 22:10:22,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-07-23 22:10:22,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-07-23 22:10:22,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:22,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:22,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:22,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:22,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-07-23 22:10:22,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:22,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690150252043
2023-07-23 22:10:22,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-07-23 22:10:22,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-07-23 22:10:22,055 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 22:10:22,058 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-07-23 22:10:22,061 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:22,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-07-23 22:10:22,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-07-23 22:10:22,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-07-23 22:10:22,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-07-23 22:10:22,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-07-23 22:10:22,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-07-23 22:10:22,070 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-07-23 22:10:22,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-07-23 22:10:22,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-07-23 22:10:22,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150222076,5,FailOnTimeoutGroup]
2023-07-23 22:10:22,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150222079,5,FailOnTimeoutGroup]
2023-07-23 22:10:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,079 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-07-23 22:10:22,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,123 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:46085, startcode=1690150220016
2023-07-23 22:10:22,124 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:34191, startcode=1690150220233
2023-07-23 22:10:22,125 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:41457, startcode=1690150220404
2023-07-23 22:10:22,133 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,138 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:22,140 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1
2023-07-23 22:10:22,152 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,153 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:22,153 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2
2023-07-23 22:10:22,153 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9
2023-07-23 22:10:22,154 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36271
2023-07-23 22:10:22,154 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38931
2023-07-23 22:10:22,155 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,155 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:22,155 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3
2023-07-23 22:10:22,174 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9
2023-07-23 22:10:22,174 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36271
2023-07-23 22:10:22,174 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38931
2023-07-23 22:10:22,183 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9
2023-07-23 22:10:22,183 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36271
2023-07-23 22:10:22,183 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38931
2023-07-23 22:10:22,187 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:22,187 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:22,189 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:22,195 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,195 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,196 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,189 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9
2023-07-23 22:10:22,196 WARN [RS:2;jenkins-hbase4:41457] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:22,202 INFO [RS:2;jenkins-hbase4:41457] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:22,196 WARN [RS:0;jenkins-hbase4:46085] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:22,195 WARN [RS:1;jenkins-hbase4:34191] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:22,202 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,203 INFO [RS:1;jenkins-hbase4:34191] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:22,204 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,203 INFO [RS:0;jenkins-hbase4:46085] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:22,204 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41457,1690150220404]
2023-07-23 22:10:22,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46085,1690150220016]
2023-07-23 22:10:22,208 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34191,1690150220233]
2023-07-23 22:10:22,312 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,312 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,313 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,313 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,314 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,322 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,323 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,330 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,331 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,345 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:22,345 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:22,345 DEBUG [RS:1;jenkins-hbase4:34191] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:22,378 INFO [RS:0;jenkins-hbase4:46085] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:22,393 INFO [RS:2;jenkins-hbase4:41457] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:22,415 INFO [RS:1;jenkins-hbase4:34191] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:22,448 INFO [RS:2;jenkins-hbase4:41457] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:22,451 INFO [RS:1;jenkins-hbase4:34191] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:22,455 INFO [RS:1;jenkins-hbase4:34191] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:22,455 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,471 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:22,459 INFO [RS:2;jenkins-hbase4:41457] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:22,473 INFO [RS:0;jenkins-hbase4:46085] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:22,472 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,474 INFO [RS:0;jenkins-hbase4:46085] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:22,474 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,480 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:22,481 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:22,484 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,484 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,484 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,485 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,485 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,485 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,485 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,485 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,486 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:2;jenkins-hbase4:41457] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:0;jenkins-hbase4:46085] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,487 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:22,487 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,488 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,488 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,488 DEBUG [RS:1;jenkins-hbase4:34191] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:22,500 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,501 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,501 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,503 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,504 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,504 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,507 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,507 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,507 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,523 INFO [RS:2;jenkins-hbase4:41457] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:22,525 INFO [RS:0;jenkins-hbase4:46085] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:22,523 INFO [RS:1;jenkins-hbase4:34191] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:22,528 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46085,1690150220016-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,529 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34191,1690150220233-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,529 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41457,1690150220404-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:22,552 INFO [RS:0;jenkins-hbase4:46085] regionserver.Replication(203): jenkins-hbase4.apache.org,46085,1690150220016 started
2023-07-23 22:10:22,552 INFO [RS:2;jenkins-hbase4:41457] regionserver.Replication(203): jenkins-hbase4.apache.org,41457,1690150220404 started
2023-07-23 22:10:22,552 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46085,1690150220016, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46085, sessionid=0x101943c28b20001
2023-07-23 22:10:22,552 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41457,1690150220404, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41457, sessionid=0x101943c28b20003
2023-07-23 22:10:22,553 INFO [RS:1;jenkins-hbase4:34191] regionserver.Replication(203): jenkins-hbase4.apache.org,34191,1690150220233 started
2023-07-23 22:10:22,553 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:22,553 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:22,553 DEBUG [RS:0;jenkins-hbase4:46085] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,553 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34191,1690150220233, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34191, sessionid=0x101943c28b20002
2023-07-23 22:10:22,554 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46085,1690150220016'
2023-07-23 22:10:22,554 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:22,554 DEBUG [RS:1;jenkins-hbase4:34191] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,553 DEBUG [RS:2;jenkins-hbase4:41457] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,554 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34191,1690150220233'
2023-07-23 22:10:22,555 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:22,554 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:22,555 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41457,1690150220404'
2023-07-23 22:10:22,555 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:22,575 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:22,576 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:22,576 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:22,576 DEBUG [RS:1;jenkins-hbase4:34191] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:22,576 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34191,1690150220233'
2023-07-23 22:10:22,576 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:22,577 DEBUG [RS:1;jenkins-hbase4:34191] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:22,577 DEBUG [RS:1;jenkins-hbase4:34191] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:22,578 INFO [RS:1;jenkins-hbase4:34191] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:22,578 INFO [RS:1;jenkins-hbase4:34191] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:22,579 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:22,579 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:22,579 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:22,579 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:22,580 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:22,580 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:22,580 DEBUG [RS:2;jenkins-hbase4:41457] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:22,581 DEBUG [RS:0;jenkins-hbase4:46085] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:22,581 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41457,1690150220404'
2023-07-23 22:10:22,581 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:22,581 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46085,1690150220016'
2023-07-23 22:10:22,581 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:22,582 DEBUG [RS:2;jenkins-hbase4:41457] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:22,582 DEBUG [RS:2;jenkins-hbase4:41457] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:22,582 DEBUG [RS:0;jenkins-hbase4:46085] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:22,582 INFO [RS:2;jenkins-hbase4:41457] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:22,583 INFO [RS:2;jenkins-hbase4:41457] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:22,584 DEBUG [RS:0;jenkins-hbase4:46085] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:22,584 INFO [RS:0;jenkins-hbase4:46085] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:22,584 INFO [RS:0;jenkins-hbase4:46085] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:22,696 INFO [RS:2;jenkins-hbase4:41457] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41457%2C1690150220404, suffix=, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,41457,1690150220404, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs, maxLogs=32
2023-07-23 22:10:22,698 INFO [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46085%2C1690150220016, suffix=, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,46085,1690150220016, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs, maxLogs=32
2023-07-23 22:10:22,703 INFO [RS:1;jenkins-hbase4:34191] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34191%2C1690150220233, suffix=, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,34191,1690150220233, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs, maxLogs=32
2023-07-23 22:10:22,763 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]
2023-07-23 22:10:22,831 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK]
2023-07-23 22:10:22,855 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:22,888 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK]
2023-07-23 22:10:22,889 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]
2023-07-23 22:10:22,888 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]
2023-07-23 22:10:22,889 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]
2023-07-23 22:10:22,895 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-23 22:10:22,900 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]
2023-07-23 22:10:22,903 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]
2023-07-23 22:10:22,903 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK]
2023-07-23 22:10:22,915 INFO [RS:2;jenkins-hbase4:41457] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,41457,1690150220404/jenkins-hbase4.apache.org%2C41457%2C1690150220404.1690150222702
2023-07-23 22:10:22,915 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/info
2023-07-23 22:10:22,915 INFO [RS:1;jenkins-hbase4:34191] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,34191,1690150220233/jenkins-hbase4.apache.org%2C34191%2C1690150220233.1690150222705
2023-07-23 22:10:22,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-23 22:10:22,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:22,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-23 22:10:22,919 DEBUG [RS:2;jenkins-hbase4:41457] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK], DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]]
2023-07-23 22:10:22,920 DEBUG [RS:1;jenkins-hbase4:34191] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK], DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]]
2023-07-23 22:10:22,920 INFO [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,46085,1690150220016/jenkins-hbase4.apache.org%2C46085%2C1690150220016.1690150222702
2023-07-23 22:10:22,921 DEBUG [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK], DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]]
2023-07-23 22:10:22,921 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:22,922 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-23 22:10:22,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:22,923 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-23 22:10:22,926 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/table
2023-07-23 22:10:22,927 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-23 22:10:22,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:22,930 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740
2023-07-23 22:10:22,932 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740
2023-07-23 22:10:22,936 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead.
2023-07-23 22:10:22,939 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-23 22:10:22,945 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:22,950 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11006599680, jitterRate=0.025069475173950195}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242}
2023-07-23 22:10:22,950 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-23 22:10:22,950 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 22:10:22,950 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 22:10:22,950 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 22:10:22,950 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 22:10:22,950 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 22:10:22,955 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:22,955 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 22:10:22,963 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 22:10:22,963 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-07-23 22:10:22,972 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-07-23 22:10:22,985 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-07-23 22:10:22,988 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-07-23 22:10:23,140 DEBUG [jenkins-hbase4:37045] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3
2023-07-23 22:10:23,164 DEBUG [jenkins-hbase4:37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:23,165 DEBUG [jenkins-hbase4:37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:23,165 DEBUG [jenkins-hbase4:37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:23,165 DEBUG [jenkins-hbase4:37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 22:10:23,165 DEBUG [jenkins-hbase4:37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:23,170 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46085,1690150220016, state=OPENING
2023-07-23 22:10:23,178 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-07-23 22:10:23,179 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:23,180 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-23 22:10:23,184 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46085,1690150220016}]
2023-07-23 22:10:23,382 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:23,385 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-23 22:10:23,390 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-23 22:10:23,414 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-07-23 22:10:23,414 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:23,418 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46085%2C1690150220016.meta, suffix=.meta, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,46085,1690150220016, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs, maxLogs=32
2023-07-23 22:10:23,456 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK]
2023-07-23 22:10:23,457 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK]
2023-07-23 22:10:23,463 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]
2023-07-23 22:10:23,474 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,46085,1690150220016/jenkins-hbase4.apache.org%2C46085%2C1690150220016.meta.1690150223419.meta
2023-07-23 22:10:23,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK], DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]]
2023-07-23 22:10:23,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:23,485 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-07-23 22:10:23,488 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-07-23 22:10:23,490 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-07-23 22:10:23,496 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-07-23 22:10:23,496 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:23,496 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-07-23 22:10:23,497 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-07-23 22:10:23,501 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-23 22:10:23,503 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/info
2023-07-23 22:10:23,503 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/info
2023-07-23 22:10:23,504 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-23 22:10:23,505 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:23,505 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-23 22:10:23,507 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:23,507 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:23,507 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-23 22:10:23,508 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:23,509 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-23 22:10:23,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/table
2023-07-23 22:10:23,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/table
2023-07-23 22:10:23,511 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-23 22:10:23,512 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:23,518 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740
2023-07-23 22:10:23,522 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740
2023-07-23 22:10:23,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead.
2023-07-23 22:10:23,529 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 22:10:23,530 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11882802080, jitterRate=0.10667218267917633}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 22:10:23,530 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 22:10:23,541 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690150223374 2023-07-23 22:10:23,566 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 22:10:23,567 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 22:10:23,568 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46085,1690150220016, state=OPEN 2023-07-23 22:10:23,571 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 22:10:23,571 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 22:10:23,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 22:10:23,576 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): 
Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46085,1690150220016 in 387 msec 2023-07-23 22:10:23,584 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 22:10:23,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 605 msec 2023-07-23 22:10:23,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.7650 sec 2023-07-23 22:10:23,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690150223600, completionTime=-1 2023-07-23 22:10:23,600 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 22:10:23,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 22:10:23,666 DEBUG [hconnection-0x126adeaf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:23,674 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47400, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:23,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 22:10:23,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690150283703 2023-07-23 22:10:23,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690150343703 2023-07-23 22:10:23,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 102 msec 2023-07-23 22:10:23,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37045,1690150218110-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:23,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37045,1690150218110-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:23,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37045,1690150218110-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:23,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37045, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:23,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:23,739 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 22:10:23,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-23 22:10:23,751 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:23,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 22:10:23,766 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:23,770 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:23,793 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:23,798 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2 empty. 2023-07-23 22:10:23,799 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:23,799 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 22:10:23,842 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:23,848 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4720e37820d079fec06cb3ab19dd54a2, NAME => 'hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:23,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:23,868 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 4720e37820d079fec06cb3ab19dd54a2, disabling compactions & flushes 2023-07-23 22:10:23,869 INFO [RegionOpenAndInit-hbase:namespace-pool-0] 
regionserver.HRegion(1626): Closing region hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:23,869 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:23,869 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. after waiting 0 ms 2023-07-23 22:10:23,869 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:23,869 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:23,869 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 4720e37820d079fec06cb3ab19dd54a2: 2023-07-23 22:10:23,874 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:23,889 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150223877"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150223877"}]},"ts":"1690150223877"} 2023-07-23 22:10:23,924 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:23,926 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:23,932 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150223927"}]},"ts":"1690150223927"} 2023-07-23 22:10:23,936 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 22:10:23,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:23,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:23,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:23,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:23,941 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:23,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4720e37820d079fec06cb3ab19dd54a2, ASSIGN}] 2023-07-23 22:10:23,947 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=4720e37820d079fec06cb3ab19dd54a2, ASSIGN 2023-07-23 22:10:23,949 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=4720e37820d079fec06cb3ab19dd54a2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:24,100 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:24,102 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4720e37820d079fec06cb3ab19dd54a2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:24,102 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150224101"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150224101"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150224101"}]},"ts":"1690150224101"} 2023-07-23 22:10:24,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 4720e37820d079fec06cb3ab19dd54a2, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:24,126 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:24,129 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 22:10:24,133 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:24,135 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:24,138 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:24,139 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103 empty. 
2023-07-23 22:10:24,140 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:24,140 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 22:10:24,171 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:24,173 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f13ed9a05b812dd1ab7a8c5d46530103, NAME => 'hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f13ed9a05b812dd1ab7a8c5d46530103, disabling compactions & flushes 2023-07-23 22:10:24,229 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] 
regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. after waiting 0 ms 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:24,229 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:24,229 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f13ed9a05b812dd1ab7a8c5d46530103: 2023-07-23 22:10:24,235 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:24,237 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150224237"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150224237"}]},"ts":"1690150224237"} 2023-07-23 22:10:24,240 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:24,242 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:24,243 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150224242"}]},"ts":"1690150224242"} 2023-07-23 22:10:24,245 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 22:10:24,252 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:24,252 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:24,252 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:24,252 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:24,252 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:24,252 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, ASSIGN}] 2023-07-23 22:10:24,255 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, ASSIGN 2023-07-23 22:10:24,257 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, 
region=f13ed9a05b812dd1ab7a8c5d46530103, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:24,270 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:24,270 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4720e37820d079fec06cb3ab19dd54a2, NAME => 'hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:24,271 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,271 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:24,271 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,272 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,274 INFO [StoreOpener-4720e37820d079fec06cb3ab19dd54a2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,277 DEBUG [StoreOpener-4720e37820d079fec06cb3ab19dd54a2-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/info 2023-07-23 22:10:24,277 DEBUG [StoreOpener-4720e37820d079fec06cb3ab19dd54a2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/info 2023-07-23 22:10:24,277 INFO [StoreOpener-4720e37820d079fec06cb3ab19dd54a2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4720e37820d079fec06cb3ab19dd54a2 columnFamilyName info 2023-07-23 22:10:24,278 INFO [StoreOpener-4720e37820d079fec06cb3ab19dd54a2-1] regionserver.HStore(310): Store=4720e37820d079fec06cb3ab19dd54a2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:24,280 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,281 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4720e37820d079fec06cb3ab19dd54a2 2023-07-23 22:10:24,291 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:24,291 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4720e37820d079fec06cb3ab19dd54a2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11602517120, jitterRate=0.08056861162185669}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:24,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4720e37820d079fec06cb3ab19dd54a2: 2023-07-23 22:10:24,293 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2., pid=6, masterSystemTime=1690150224262 2023-07-23 22:10:24,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 2023-07-23 22:10:24,298 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. 
2023-07-23 22:10:24,299 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=4720e37820d079fec06cb3ab19dd54a2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:24,299 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150224299"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150224299"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150224299"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150224299"}]},"ts":"1690150224299"}
2023-07-23 22:10:24,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-07-23 22:10:24,307 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 4720e37820d079fec06cb3ab19dd54a2, server=jenkins-hbase4.apache.org,46085,1690150220016 in 195 msec
2023-07-23 22:10:24,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-07-23 22:10:24,312 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=4720e37820d079fec06cb3ab19dd54a2, ASSIGN in 364 msec
2023-07-23 22:10:24,313 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:24,313 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150224313"}]},"ts":"1690150224313"}
2023-07-23 22:10:24,316 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-07-23 22:10:24,320 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:24,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 568 msec
2023-07-23 22:10:24,365 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-07-23 22:10:24,366 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:24,366 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:24,407 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 22:10:24,409 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:24,409 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150224409"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150224409"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150224409"}]},"ts":"1690150224409"}
2023-07-23 22:10:24,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=8, state=RUNNABLE; OpenRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,41457,1690150220404}]
2023-07-23 22:10:24,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-07-23 22:10:24,440 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:24,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 47 msec
2023-07-23 22:10:24,460 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-07-23 22:10:24,464 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1
2023-07-23 22:10:24,464 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-07-23 22:10:24,573 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:24,573 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-23 22:10:24,577 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38184, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-23 22:10:24,582 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:24,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f13ed9a05b812dd1ab7a8c5d46530103, NAME => 'hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:24,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-07-23 22:10:24,583 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. service=MultiRowMutationService
2023-07-23 22:10:24,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully.
2023-07-23 22:10:24,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:24,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,586 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,588 DEBUG [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m
2023-07-23 22:10:24,588 DEBUG [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m
2023-07-23 22:10:24,589 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f13ed9a05b812dd1ab7a8c5d46530103 columnFamilyName m
2023-07-23 22:10:24,590 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] regionserver.HStore(310): Store=f13ed9a05b812dd1ab7a8c5d46530103/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:24,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,599 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:24,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:24,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f13ed9a05b812dd1ab7a8c5d46530103; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7f7f8b89, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:24,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f13ed9a05b812dd1ab7a8c5d46530103:
2023-07-23 22:10:24,606 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103., pid=10, masterSystemTime=1690150224573
2023-07-23 22:10:24,610 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:24,611 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:24,612 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:24,613 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150224612"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150224612"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150224612"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150224612"}]},"ts":"1690150224612"}
2023-07-23 22:10:24,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=8
2023-07-23 22:10:24,619 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=8, state=SUCCESS; OpenRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,41457,1690150220404 in 202 msec
2023-07-23 22:10:24,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7
2023-07-23 22:10:24,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, ASSIGN in 367 msec
2023-07-23 22:10:24,646 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:24,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 203 msec
2023-07-23 22:10:24,666 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:24,667 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150224667"}]},"ts":"1690150224667"}
2023-07-23 22:10:24,676 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta
2023-07-23 22:10:24,681 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:24,681 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default
2023-07-23 22:10:24,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 555 msec
2023-07-23 22:10:24,685 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase
2023-07-23 22:10:24,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 4.088sec
2023-07-23 22:10:24,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled
2023-07-23 22:10:24,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting.
2023-07-23 22:10:24,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding
2023-07-23 22:10:24,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37045,1690150218110-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled.
2023-07-23 22:10:24,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37045,1690150218110-MobCompactionChore, period=604800, unit=SECONDS is enabled.
2023-07-23 22:10:24,707 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds
2023-07-23 22:10:24,740 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-23 22:10:24,741 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-23 22:10:24,744 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information
2023-07-23 22:10:24,744 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode.
2023-07-23 22:10:24,764 DEBUG [Listener at localhost/42675] zookeeper.ReadOnlyZKClient(139): Connect 0x2ea64db6 to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:24,774 DEBUG [Listener at localhost/42675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e4d79c0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:24,799 DEBUG [hconnection-0x3ed81975-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-23 22:10:24,813 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47406, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-23 22:10:24,833 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:24,833 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:24,833 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:24,835 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:24,837 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2
2023-07-23 22:10:24,846 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online
2023-07-23 22:10:24,849 DEBUG [Listener at localhost/42675] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false
2023-07-23 22:10:24,854 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53220, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService
2023-07-23 22:10:24,871 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer
2023-07-23 22:10:24,871 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:24,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false
2023-07-23 22:10:24,878 DEBUG [Listener at localhost/42675] zookeeper.ReadOnlyZKClient(139): Connect 0x1b1b113f to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:24,889 DEBUG [Listener at localhost/42675] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13aa6d9f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:24,890 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:52385
2023-07-23 22:10:24,897 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 22:10:24,899 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101943c28b2000a connected
2023-07-23 22:10:24,936 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=420, OpenFileDescriptor=675, MaxFileDescriptor=60000, SystemLoadAverage=465, ProcessCount=178, AvailableMemoryMB=6533
2023-07-23 22:10:24,938 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop
2023-07-23 22:10:24,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:24,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:25,026 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 1
2023-07-23 22:10:25,040 INFO [Listener at localhost/42675] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 22:10:25,040 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:25,040 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:25,040 INFO [Listener at localhost/42675] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 22:10:25,040 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 22:10:25,041 INFO [Listener at localhost/42675] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 22:10:25,041 INFO [Listener at localhost/42675] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 22:10:25,050 INFO [Listener at localhost/42675] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39885
2023-07-23 22:10:25,050 INFO [Listener at localhost/42675] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 22:10:25,058 DEBUG [Listener at localhost/42675] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 22:10:25,060 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:25,064 INFO [Listener at localhost/42675] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 22:10:25,069 INFO [Listener at localhost/42675] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39885 connecting to ZooKeeper ensemble=127.0.0.1:52385
2023-07-23 22:10:25,077 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:398850x0, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 22:10:25,078 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(162): regionserver:398850x0, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-23 22:10:25,080 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(162): regionserver:398850x0, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/running
2023-07-23 22:10:25,081 DEBUG [Listener at localhost/42675] zookeeper.ZKUtil(164): regionserver:398850x0, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 22:10:25,091 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39885-0x101943c28b2000b connected
2023-07-23 22:10:25,091 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39885
2023-07-23 22:10:25,091 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39885
2023-07-23 22:10:25,092 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39885
2023-07-23 22:10:25,095 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39885
2023-07-23 22:10:25,097 DEBUG [Listener at localhost/42675] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39885
2023-07-23 22:10:25,099 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 22:10:25,099 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 22:10:25,100 INFO [Listener at localhost/42675] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 22:10:25,100 INFO [Listener at localhost/42675] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 22:10:25,101 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 22:10:25,101 INFO [Listener at localhost/42675] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 22:10:25,101 INFO [Listener at localhost/42675] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 22:10:25,102 INFO [Listener at localhost/42675] http.HttpServer(1146): Jetty bound to port 35271
2023-07-23 22:10:25,102 INFO [Listener at localhost/42675] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 22:10:25,109 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:25,109 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c02caab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,AVAILABLE}
2023-07-23 22:10:25,109 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:25,110 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 22:10:25,246 INFO [Listener at localhost/42675] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 22:10:25,247 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 22:10:25,247 INFO [Listener at localhost/42675] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 22:10:25,248 INFO [Listener at localhost/42675] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-23 22:10:25,252 INFO [Listener at localhost/42675] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 22:10:25,256 INFO [Listener at localhost/42675] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@780935ef{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/java.io.tmpdir/jetty-0_0_0_0-35271-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7036917203338723615/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:25,262 INFO [Listener at localhost/42675] server.AbstractConnector(333): Started ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:35271}
2023-07-23 22:10:25,262 INFO [Listener at localhost/42675] server.Server(415): Started @12991ms
2023-07-23 22:10:25,267 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(951): ClusterId : 5f63c2c7-e6db-4025-95e9-260944de441a
2023-07-23 22:10:25,268 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:25,271 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:25,271 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:25,274 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:25,276 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ReadOnlyZKClient(139): Connect 0x4f9c4082 to 127.0.0.1:52385 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:25,301 DEBUG [RS:3;jenkins-hbase4:39885] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@415ba701, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:25,302 DEBUG [RS:3;jenkins-hbase4:39885] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45642bf2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:25,316 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:39885
2023-07-23 22:10:25,316 INFO [RS:3;jenkins-hbase4:39885] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:25,316 INFO [RS:3;jenkins-hbase4:39885] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:25,316 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:25,317 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37045,1690150218110 with isa=jenkins-hbase4.apache.org/172.31.14.131:39885, startcode=1690150225039
2023-07-23 22:10:25,317 DEBUG [RS:3;jenkins-hbase4:39885] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:25,321 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54469, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:25,322 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37045] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,322 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:25,323 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9
2023-07-23 22:10:25,323 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36271
2023-07-23 22:10:25,323 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38931
2023-07-23 22:10:25,328 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:25,329 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:25,329 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:25,329 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:25,330 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39885,1690150225039]
2023-07-23 22:10:25,330 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ZKUtil(162): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,330 WARN [RS:3;jenkins-hbase4:39885] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:25,330 INFO [RS:3;jenkins-hbase4:39885] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:25,331 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,331 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:25,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:25,332 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:25,334 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:25,334 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:25,340 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:25,342 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:25,343 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:25,343 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:25,343 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:25,343 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2
2023-07-23 22:10:25,354 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37045,1690150218110] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers:
4 2023-07-23 22:10:25,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:25,358 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ZKUtil(162): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:25,359 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ZKUtil(162): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:25,359 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ZKUtil(162): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:25,360 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ZKUtil(162): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:25,361 DEBUG [RS:3;jenkins-hbase4:39885] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 22:10:25,361 INFO [RS:3;jenkins-hbase4:39885] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 22:10:25,366 INFO [RS:3;jenkins-hbase4:39885] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 22:10:25,372 INFO [RS:3;jenkins-hbase4:39885] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 22:10:25,373 INFO 
[RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:25,373 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 22:10:25,375 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:25,375 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,376 DEBUG [RS:3;jenkins-hbase4:39885] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:25,378 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:25,378 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:25,378 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:25,392 INFO [RS:3;jenkins-hbase4:39885] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 22:10:25,393 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39885,1690150225039-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:25,409 INFO [RS:3;jenkins-hbase4:39885] regionserver.Replication(203): jenkins-hbase4.apache.org,39885,1690150225039 started 2023-07-23 22:10:25,409 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39885,1690150225039, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39885, sessionid=0x101943c28b2000b 2023-07-23 22:10:25,409 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 22:10:25,409 DEBUG [RS:3;jenkins-hbase4:39885] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:25,409 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39885,1690150225039' 2023-07-23 22:10:25,409 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 22:10:25,413 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 22:10:25,413 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 22:10:25,413 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 22:10:25,413 DEBUG [RS:3;jenkins-hbase4:39885] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:25,413 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39885,1690150225039' 2023-07-23 22:10:25,414 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-07-23 22:10:25,414 DEBUG [RS:3;jenkins-hbase4:39885] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 22:10:25,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:25,415 DEBUG [RS:3;jenkins-hbase4:39885] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 22:10:25,415 INFO [RS:3;jenkins-hbase4:39885] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 22:10:25,415 INFO [RS:3;jenkins-hbase4:39885] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 22:10:25,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:25,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:25,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:25,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:25,429 DEBUG [hconnection-0x30296f6d-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:25,436 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47418, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:25,442 DEBUG [hconnection-0x30296f6d-shared-pool-0] 
ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:25,446 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38202, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:25,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:25,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:25,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:25,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:25,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:53220 deadline: 1690151425459, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
2023-07-23 22:10:25,461 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
2023-07-23 22:10:25,464 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:25,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:25,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:25,466 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:25,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:25,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:25,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:25,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:25,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045]
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:25,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:25,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:25,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:25,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:25,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:25,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 
22:10:25,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:25,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 22:10:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to default 2023-07-23 22:10:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:25,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:25,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:25,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates 
rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:25,518 INFO [RS:3;jenkins-hbase4:39885] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39885%2C1690150225039, suffix=, logDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,39885,1690150225039, archiveDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs, maxLogs=32 2023-07-23 22:10:25,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:25,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:25,531 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:25,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: 
"Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-23 22:10:25,552 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:25,553 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:25,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:25,562 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:25,564 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:25,569 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK] 2023-07-23 22:10:25,576 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK] 2023-07-23 22:10:25,578 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK] 2023-07-23 22:10:25,584 INFO [RS:3;jenkins-hbase4:39885] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/WALs/jenkins-hbase4.apache.org,39885,1690150225039/jenkins-hbase4.apache.org%2C39885%2C1690150225039.1690150225520 2023-07-23 22:10:25,586 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, 
state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:25,592 DEBUG [RS:3;jenkins-hbase4:39885] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46449,DS-31d3704b-bd83-48de-891f-b615b01573de,DISK], DatanodeInfoWithStorage[127.0.0.1:38869,DS-24aad250-36de-4713-bdf0-09f9b911a9f6,DISK], DatanodeInfoWithStorage[127.0.0.1:35551,DS-20f367d0-1bf6-47cd-b1ac-ff2b640f1ccc,DISK]] 2023-07-23 22:10:25,596 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:25,596 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:25,596 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:25,598 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 empty. 
2023-07-23 22:10:25,598 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:25,598 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:25,598 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 empty. 2023-07-23 22:10:25,598 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f empty. 2023-07-23 22:10:25,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 empty. 
2023-07-23 22:10:25,599 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:25,599 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:25,599 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:25,599 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 empty. 
2023-07-23 22:10:25,600 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:25,600 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:25,600 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 22:10:25,634 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:25,636 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3e658fd82f1735e6295ab1c4733e049f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:25,640 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 73c7538ece06421485a9cd39e3d07ba6, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.', STARTKEY => 
'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:25,644 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ada1271a9b28f1f411b60809ea5570d6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:25,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:25,721 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:25,722 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3e658fd82f1735e6295ab1c4733e049f, 
disabling compactions & flushes 2023-07-23 22:10:25,722 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:25,722 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:25,722 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. after waiting 0 ms 2023-07-23 22:10:25,722 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:25,722 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:25,723 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 73c7538ece06421485a9cd39e3d07ba6, disabling compactions & flushes 2023-07-23 22:10:25,723 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:25,723 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:25,723 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:25,723 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3e658fd82f1735e6295ab1c4733e049f: 2023-07-23 22:10:25,724 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. after waiting 0 ms 2023-07-23 22:10:25,724 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:25,724 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:25,724 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 73c7538ece06421485a9cd39e3d07ba6: 2023-07-23 22:10:25,724 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5c5c86f52b26506da3abb0087a43dd51, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:25,727 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 77e95de30d7232e53eee759695b6f629, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:25,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:25,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ada1271a9b28f1f411b60809ea5570d6, disabling compactions & flushes 2023-07-23 22:10:25,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:25,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:25,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. after waiting 0 ms 2023-07-23 22:10:25,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:25,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:25,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ada1271a9b28f1f411b60809ea5570d6: 2023-07-23 22:10:25,758 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:25,759 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5c5c86f52b26506da3abb0087a43dd51, disabling compactions & flushes 2023-07-23 22:10:25,759 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:25,759 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:25,759 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. after waiting 0 ms 2023-07-23 22:10:25,759 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:25,759 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:25,759 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5c5c86f52b26506da3abb0087a43dd51: 2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 77e95de30d7232e53eee759695b6f629, disabling compactions & flushes 2023-07-23 22:10:25,760 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. after waiting 0 ms 2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:25,760 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 
2023-07-23 22:10:25,760 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 77e95de30d7232e53eee759695b6f629: 2023-07-23 22:10:25,770 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:25,771 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150225771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150225771"}]},"ts":"1690150225771"} 2023-07-23 22:10:25,772 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150225771"}]},"ts":"1690150225771"} 2023-07-23 22:10:25,772 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150225771"}]},"ts":"1690150225771"} 2023-07-23 22:10:25,772 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150225771"}]},"ts":"1690150225771"} 2023-07-23 22:10:25,772 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150225771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150225771"}]},"ts":"1690150225771"} 2023-07-23 22:10:25,819 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 2023-07-23 22:10:25,821 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:25,821 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150225821"}]},"ts":"1690150225821"} 2023-07-23 22:10:25,823 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 22:10:25,832 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:25,833 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:25,833 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:25,833 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:25,833 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, ASSIGN}] 2023-07-23 22:10:25,836 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, ASSIGN 2023-07-23 22:10:25,836 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, ASSIGN 2023-07-23 22:10:25,838 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, ASSIGN 2023-07-23 22:10:25,838 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, ASSIGN 2023-07-23 22:10:25,839 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, 
ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, ASSIGN 2023-07-23 22:10:25,839 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:25,839 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:25,839 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:25,841 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:25,843 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:25,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:25,990 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-23 22:10:25,993 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:25,993 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:25,993 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:25,993 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:25,994 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150225993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150225993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150225993"}]},"ts":"1690150225993"} 2023-07-23 22:10:25,994 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150225993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150225993"}]},"ts":"1690150225993"} 2023-07-23 22:10:25,993 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:25,994 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150225993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150225993"}]},"ts":"1690150225993"} 2023-07-23 22:10:25,994 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150225993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150225993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150225993"}]},"ts":"1690150225993"} 2023-07-23 22:10:25,994 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150225993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150225993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150225993"}]},"ts":"1690150225993"} 
2023-07-23 22:10:25,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:25,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=16, state=RUNNABLE; OpenRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:26,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=14, state=RUNNABLE; OpenRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:26,006 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=13, state=RUNNABLE; OpenRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:26,007 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:26,160 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 
2023-07-23 22:10:26,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77e95de30d7232e53eee759695b6f629, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 22:10:26,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:26,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:26,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c5c86f52b26506da3abb0087a43dd51, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 22:10:26,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:26,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,167 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,168 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,169 DEBUG 
[StoreOpener-77e95de30d7232e53eee759695b6f629-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/f 2023-07-23 22:10:26,169 DEBUG [StoreOpener-77e95de30d7232e53eee759695b6f629-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/f 2023-07-23 22:10:26,169 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77e95de30d7232e53eee759695b6f629 columnFamilyName f 2023-07-23 22:10:26,170 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] regionserver.HStore(310): Store=77e95de30d7232e53eee759695b6f629/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:26,171 DEBUG [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/f 2023-07-23 22:10:26,172 DEBUG 
[StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/f 2023-07-23 22:10:26,172 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c5c86f52b26506da3abb0087a43dd51 columnFamilyName f 2023-07-23 22:10:26,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,173 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] regionserver.HStore(310): Store=5c5c86f52b26506da3abb0087a43dd51/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:26,180 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:26,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:26,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 77e95de30d7232e53eee759695b6f629; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9888508000, jitterRate=-0.07906092703342438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:26,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 77e95de30d7232e53eee759695b6f629: 2023-07-23 22:10:26,192 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629., pid=18, masterSystemTime=1690150226154 2023-07-23 22:10:26,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:26,194 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c5c86f52b26506da3abb0087a43dd51; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9627580480, jitterRate=-0.10336169600486755}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:26,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c5c86f52b26506da3abb0087a43dd51: 2023-07-23 22:10:26,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:26,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:26,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:26,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ada1271a9b28f1f411b60809ea5570d6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 22:10:26,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51., pid=19, masterSystemTime=1690150226156 2023-07-23 22:10:26,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,196 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:26,197 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150226196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150226196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150226196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150226196"}]},"ts":"1690150226196"} 2023-07-23 22:10:26,197 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:26,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:26,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:26,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73c7538ece06421485a9cd39e3d07ba6, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 22:10:26,202 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:26,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:26,202 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150226202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150226202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150226202"}]},"ts":"1690150226202"} 2023-07-23 22:10:26,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,204 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,208 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 22:10:26,208 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,46085,1690150220016 in 204 msec 2023-07-23 22:10:26,209 DEBUG [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/f 2023-07-23 22:10:26,209 DEBUG [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/f 2023-07-23 22:10:26,209 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ada1271a9b28f1f411b60809ea5570d6 columnFamilyName f 2023-07-23 22:10:26,210 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] regionserver.HStore(310): Store=ada1271a9b28f1f411b60809ea5570d6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:26,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=16 2023-07-23 
22:10:26,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=16, state=SUCCESS; OpenRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,41457,1690150220404 in 208 msec 2023-07-23 22:10:26,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, ASSIGN in 375 msec 2023-07-23 22:10:26,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,215 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, ASSIGN in 380 msec 2023-07-23 22:10:26,219 DEBUG [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/f 2023-07-23 22:10:26,219 DEBUG [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/f 
2023-07-23 22:10:26,220 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73c7538ece06421485a9cd39e3d07ba6 columnFamilyName f 2023-07-23 22:10:26,221 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] regionserver.HStore(310): Store=73c7538ece06421485a9cd39e3d07ba6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:26,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:26,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 
73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:26,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:26,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ada1271a9b28f1f411b60809ea5570d6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11620357920, jitterRate=0.08223016560077667}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:26,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73c7538ece06421485a9cd39e3d07ba6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11227198400, jitterRate=0.0456143319606781}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:26,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ada1271a9b28f1f411b60809ea5570d6: 2023-07-23 22:10:26,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73c7538ece06421485a9cd39e3d07ba6: 2023-07-23 22:10:26,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6., pid=22, masterSystemTime=1690150226154 2023-07-23 22:10:26,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6., pid=20, masterSystemTime=1690150226156 2023-07-23 22:10:26,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:26,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:26,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:26,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e658fd82f1735e6295ab1c4733e049f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 22:10:26,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:26,237 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,237 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226237"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150226237"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150226237"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150226237"}]},"ts":"1690150226237"} 2023-07-23 
22:10:26,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:26,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:26,238 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:26,240 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226238"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150226238"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150226238"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150226238"}]},"ts":"1690150226238"} 2023-07-23 22:10:26,244 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,247 DEBUG [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/f 2023-07-23 22:10:26,247 DEBUG [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] util.CommonFSUtils(522): Set storagePolicy=HOT 
for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/f 2023-07-23 22:10:26,248 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-23 22:10:26,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,46085,1690150220016 in 235 msec 2023-07-23 22:10:26,248 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e658fd82f1735e6295ab1c4733e049f columnFamilyName f 2023-07-23 22:10:26,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=14 2023-07-23 22:10:26,250 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=14, state=SUCCESS; OpenRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,41457,1690150220404 in 240 msec 2023-07-23 22:10:26,251 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] regionserver.HStore(310): Store=3e658fd82f1735e6295ab1c4733e049f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-07-23 22:10:26,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, ASSIGN in 415 msec 2023-07-23 22:10:26,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, ASSIGN in 415 msec 2023-07-23 22:10:26,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:26,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e658fd82f1735e6295ab1c4733e049f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10326297600, jitterRate=-0.03828859329223633}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:26,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e658fd82f1735e6295ab1c4733e049f: 2023-07-23 22:10:26,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f., pid=21, masterSystemTime=1690150226154 2023-07-23 22:10:26,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:26,271 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,271 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:26,271 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150226271"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150226271"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150226271"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150226271"}]},"ts":"1690150226271"} 2023-07-23 22:10:26,277 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=13 2023-07-23 22:10:26,277 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=13, state=SUCCESS; OpenRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,46085,1690150220016 in 268 msec 2023-07-23 22:10:26,280 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 22:10:26,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, ASSIGN in 444 msec 2023-07-23 22:10:26,282 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:26,282 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150226282"}]},"ts":"1690150226282"} 2023-07-23 22:10:26,284 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 22:10:26,287 INFO [PEWorker-4] 
procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:26,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 762 msec 2023-07-23 22:10:26,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:26,689 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-23 22:10:26,689 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-23 22:10:26,690 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:26,697 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-23 22:10:26,698 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:26,698 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 
2023-07-23 22:10:26,699 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:26,713 DEBUG [Listener at localhost/42675] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:26,719 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:26,723 DEBUG [Listener at localhost/42675] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:26,727 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:26,728 DEBUG [Listener at localhost/42675] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:26,731 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38210, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:26,733 DEBUG [Listener at localhost/42675] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:26,736 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47432, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:26,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:26,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service 
request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:26,749 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:26,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:26,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:26,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 3e658fd82f1735e6295ab1c4733e049f to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:26,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): 
server 0 is on host 0 2023-07-23 22:10:26,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:26,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:26,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:26,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, REOPEN/MOVE 2023-07-23 22:10:26,772 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, REOPEN/MOVE 2023-07-23 22:10:26,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 73c7538ece06421485a9cd39e3d07ba6 to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:26,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:26,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:26,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:26,773 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:26,773 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,773 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150226773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150226773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150226773"}]},"ts":"1690150226773"} 2023-07-23 22:10:26,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, REOPEN/MOVE 2023-07-23 22:10:26,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region ada1271a9b28f1f411b60809ea5570d6 to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,775 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, REOPEN/MOVE 2023-07-23 22:10:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 
22:10:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:26,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:26,777 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:26,777 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226776"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150226776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150226776"}]},"ts":"1690150226776"} 2023-07-23 22:10:26,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, REOPEN/MOVE 2023-07-23 22:10:26,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 5c5c86f52b26506da3abb0087a43dd51 to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,778 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, REOPEN/MOVE 2023-07-23 22:10:26,778 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:26,778 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:26,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:26,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:26,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:26,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:26,779 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,779 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226779"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150226779"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150226779"}]},"ts":"1690150226779"} 2023-07-23 22:10:26,780 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; CloseRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 
22:10:26,781 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=25, state=RUNNABLE; CloseRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:26,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, REOPEN/MOVE 2023-07-23 22:10:26,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 77e95de30d7232e53eee759695b6f629 to RSGroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:26,785 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, REOPEN/MOVE 2023-07-23 22:10:26,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:26,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:26,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:26,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:26,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:26,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, REOPEN/MOVE 2023-07-23 22:10:26,787 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:26,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_627083458, current retry=0 2023-07-23 22:10:26,788 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226786"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150226786"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150226786"}]},"ts":"1690150226786"} 2023-07-23 22:10:26,788 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, REOPEN/MOVE 2023-07-23 22:10:26,790 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:26,790 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150226790"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150226790"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150226790"}]},"ts":"1690150226790"} 2023-07-23 22:10:26,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:26,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE; CloseRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:26,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c5c86f52b26506da3abb0087a43dd51, disabling compactions & flushes 2023-07-23 22:10:26,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:26,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:26,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. after waiting 0 ms 2023-07-23 22:10:26,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:26,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e658fd82f1735e6295ab1c4733e049f, disabling compactions & flushes 2023-07-23 22:10:26,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:26,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:26,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. after waiting 0 ms 2023-07-23 22:10:26,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:26,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:26,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:26,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c5c86f52b26506da3abb0087a43dd51: 2023-07-23 22:10:26,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5c5c86f52b26506da3abb0087a43dd51 move to jenkins-hbase4.apache.org,39885,1690150225039 record at close sequenceid=2 2023-07-23 22:10:26,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:26,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73c7538ece06421485a9cd39e3d07ba6, disabling compactions & flushes 2023-07-23 22:10:26,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:26,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:26,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. after waiting 0 ms 2023-07-23 22:10:26,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:26,967 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=CLOSED 2023-07-23 22:10:26,967 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226967"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150226967"}]},"ts":"1690150226967"} 2023-07-23 22:10:26,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:26,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:26,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e658fd82f1735e6295ab1c4733e049f: 2023-07-23 22:10:26,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3e658fd82f1735e6295ab1c4733e049f move to jenkins-hbase4.apache.org,34191,1690150220233 record at close sequenceid=2 2023-07-23 22:10:26,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-23 22:10:26,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,41457,1690150220404 in 178 msec 2023-07-23 22:10:26,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:26,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,978 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false 2023-07-23 22:10:26,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 77e95de30d7232e53eee759695b6f629, disabling compactions & flushes 2023-07-23 22:10:26,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 
2023-07-23 22:10:26,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:26,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. after waiting 0 ms 2023-07-23 22:10:26,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:26,985 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=CLOSED 2023-07-23 22:10:26,985 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150226985"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150226985"}]},"ts":"1690150226985"} 2023-07-23 22:10:26,986 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:26,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:26,988 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73c7538ece06421485a9cd39e3d07ba6: 2023-07-23 22:10:26,988 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 73c7538ece06421485a9cd39e3d07ba6 move to jenkins-hbase4.apache.org,39885,1690150225039 record at close sequenceid=2 2023-07-23 22:10:26,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:26,992 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=CLOSED 2023-07-23 22:10:26,992 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150226992"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150226992"}]},"ts":"1690150226992"} 2023-07-23 22:10:26,993 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-23 22:10:26,993 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,46085,1690150220016 in 210 msec 2023-07-23 22:10:26,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:26,995 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:26,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:26,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 77e95de30d7232e53eee759695b6f629: 2023-07-23 22:10:26,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 77e95de30d7232e53eee759695b6f629 move to jenkins-hbase4.apache.org,39885,1690150225039 record at close sequenceid=2 2023-07-23 22:10:26,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:26,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ada1271a9b28f1f411b60809ea5570d6, disabling compactions & flushes 2023-07-23 22:10:27,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:27,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:27,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. after waiting 0 ms 2023-07-23 22:10:27,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:27,002 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=CLOSED 2023-07-23 22:10:27,003 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150227002"}]},"ts":"1690150227002"} 2023-07-23 22:10:27,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-23 22:10:27,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; CloseRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,41457,1690150220404 in 215 msec 2023-07-23 22:10:27,005 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false 2023-07-23 22:10:27,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=30 2023-07-23 22:10:27,015 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=30, state=SUCCESS; CloseRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,46085,1690150220016 in 213 msec 2023-07-23 22:10:27,017 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false 2023-07-23 22:10:27,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:27,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:27,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ada1271a9b28f1f411b60809ea5570d6: 2023-07-23 22:10:27,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ada1271a9b28f1f411b60809ea5570d6 move to jenkins-hbase4.apache.org,34191,1690150220233 record at close sequenceid=2 2023-07-23 22:10:27,033 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,034 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=CLOSED 2023-07-23 22:10:27,034 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227034"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150227034"}]},"ts":"1690150227034"} 2023-07-23 22:10:27,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=25 2023-07-23 22:10:27,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=25, state=SUCCESS; CloseRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,46085,1690150220016 in 259 msec 2023-07-23 22:10:27,046 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:27,128 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 
5 retained the pre-restart assignment. 2023-07-23 22:10:27,128 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,128 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,129 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227128"}]},"ts":"1690150227128"} 2023-07-23 22:10:27,128 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,128 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,128 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,129 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227128"}]},"ts":"1690150227128"} 2023-07-23 22:10:27,129 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227128"}]},"ts":"1690150227128"} 2023-07-23 22:10:27,129 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227128"}]},"ts":"1690150227128"} 2023-07-23 22:10:27,129 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227128"}]},"ts":"1690150227128"} 2023-07-23 22:10:27,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=25, state=RUNNABLE; OpenRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:27,133 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; OpenRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=23, state=RUNNABLE; OpenRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:27,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=27, state=RUNNABLE; OpenRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,144 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=24, state=RUNNABLE; OpenRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,284 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,284 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:27,288 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44668, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:27,289 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,289 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:27,291 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 
22:10:27,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:27,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3e658fd82f1735e6295ab1c4733e049f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 22:10:27,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:27,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:27,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 73c7538ece06421485a9cd39e3d07ba6, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 22:10:27,295 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:27,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,297 DEBUG [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/f 2023-07-23 22:10:27,297 DEBUG [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/f 2023-07-23 22:10:27,297 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3e658fd82f1735e6295ab1c4733e049f columnFamilyName f 2023-07-23 22:10:27,297 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,298 INFO [StoreOpener-3e658fd82f1735e6295ab1c4733e049f-1] regionserver.HStore(310): Store=3e658fd82f1735e6295ab1c4733e049f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:27,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,301 DEBUG 
[StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/f 2023-07-23 22:10:27,301 DEBUG [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/f 2023-07-23 22:10:27,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,303 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 73c7538ece06421485a9cd39e3d07ba6 columnFamilyName f 2023-07-23 22:10:27,304 INFO [StoreOpener-73c7538ece06421485a9cd39e3d07ba6-1] regionserver.HStore(310): Store=73c7538ece06421485a9cd39e3d07ba6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:27,305 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:27,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:27,310 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3e658fd82f1735e6295ab1c4733e049f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10076152320, jitterRate=-0.061585187911987305}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:27,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3e658fd82f1735e6295ab1c4733e049f: 2023-07-23 22:10:27,311 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f., pid=35, masterSystemTime=1690150227284 2023-07-23 22:10:27,313 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 73c7538ece06421485a9cd39e3d07ba6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9411995680, jitterRate=-0.12343959510326385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:27,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 73c7538ece06421485a9cd39e3d07ba6: 2023-07-23 22:10:27,314 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6., pid=37, masterSystemTime=1690150227289 2023-07-23 22:10:27,316 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:27,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:27,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:27,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ada1271a9b28f1f411b60809ea5570d6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 22:10:27,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,318 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:27,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,318 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227318"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150227318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150227318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150227318"}]},"ts":"1690150227318"} 2023-07-23 22:10:27,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:27,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:27,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:27,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 77e95de30d7232e53eee759695b6f629, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 22:10:27,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,319 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:27,320 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227319"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150227319"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150227319"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150227319"}]},"ts":"1690150227319"} 2023-07-23 22:10:27,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,321 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,321 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,322 DEBUG [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/f 2023-07-23 22:10:27,322 DEBUG [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/f 2023-07-23 22:10:27,323 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ada1271a9b28f1f411b60809ea5570d6 columnFamilyName f 2023-07-23 22:10:27,323 DEBUG [StoreOpener-77e95de30d7232e53eee759695b6f629-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/f 2023-07-23 22:10:27,324 DEBUG [StoreOpener-77e95de30d7232e53eee759695b6f629-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/f 2023-07-23 22:10:27,323 INFO [StoreOpener-ada1271a9b28f1f411b60809ea5570d6-1] regionserver.HStore(310): Store=ada1271a9b28f1f411b60809ea5570d6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:27,325 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 77e95de30d7232e53eee759695b6f629 columnFamilyName f 2023-07-23 22:10:27,325 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=23 2023-07-23 22:10:27,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,325 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=23, state=SUCCESS; OpenRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,34191,1690150220233 in 186 msec 2023-07-23 22:10:27,326 INFO [StoreOpener-77e95de30d7232e53eee759695b6f629-1] regionserver.HStore(310): Store=77e95de30d7232e53eee759695b6f629/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:27,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=24 2023-07-23 22:10:27,326 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=24, state=SUCCESS; OpenRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,39885,1690150225039 in 179 msec 2023-07-23 
22:10:27,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,329 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, REOPEN/MOVE in 556 msec 2023-07-23 22:10:27,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,329 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, REOPEN/MOVE in 553 msec 2023-07-23 22:10:27,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:27,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ada1271a9b28f1f411b60809ea5570d6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10715907680, jitterRate=-0.0020033270120620728}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:27,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 77e95de30d7232e53eee759695b6f629; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11993483520, jitterRate=0.11698019504547119}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:27,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ada1271a9b28f1f411b60809ea5570d6: 2023-07-23 22:10:27,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 77e95de30d7232e53eee759695b6f629: 2023-07-23 22:10:27,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6., pid=33, masterSystemTime=1690150227284 2023-07-23 22:10:27,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629., pid=34, masterSystemTime=1690150227289 2023-07-23 22:10:27,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 
2023-07-23 22:10:27,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:27,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227342"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150227342"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150227342"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150227342"}]},"ts":"1690150227342"} 2023-07-23 22:10:27,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:27,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c5c86f52b26506da3abb0087a43dd51, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 22:10:27,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,343 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:27,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,344 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227343"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150227343"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150227343"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150227343"}]},"ts":"1690150227343"} 2023-07-23 22:10:27,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,347 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,349 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=25 2023-07-23 22:10:27,349 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=25, state=SUCCESS; OpenRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,34191,1690150220233 in 214 msec 2023-07-23 22:10:27,351 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, REOPEN/MOVE in 574 msec 2023-07-23 22:10:27,351 
DEBUG [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/f 2023-07-23 22:10:27,351 DEBUG [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/f 2023-07-23 22:10:27,352 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c5c86f52b26506da3abb0087a43dd51 columnFamilyName f 2023-07-23 22:10:27,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:27,353 INFO [StoreOpener-5c5c86f52b26506da3abb0087a43dd51-1] regionserver.HStore(310): Store=5c5c86f52b26506da3abb0087a43dd51/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:27,353 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:27,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-23 22:10:27,356 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; OpenRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,39885,1690150225039 in 220 msec 2023-07-23 22:10:27,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, REOPEN/MOVE in 571 msec 2023-07-23 22:10:27,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:27,362 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 5c5c86f52b26506da3abb0087a43dd51; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11622916320, jitterRate=0.0824684351682663}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:27,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c5c86f52b26506da3abb0087a43dd51: 2023-07-23 22:10:27,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51., pid=36, masterSystemTime=1690150227289 2023-07-23 22:10:27,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:27,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:27,365 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,366 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227365"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150227365"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150227365"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150227365"}]},"ts":"1690150227365"} 2023-07-23 22:10:27,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=27 2023-07-23 22:10:27,370 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; OpenRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,39885,1690150225039 in 228 msec 2023-07-23 22:10:27,371 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, REOPEN/MOVE in 592 msec 2023-07-23 22:10:27,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-23 22:10:27,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_627083458. 
2023-07-23 22:10:27,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:27,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:27,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:27,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:27,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:27,800 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:27,806 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:27,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:27,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:27,826 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150227826"}]},"ts":"1690150227826"} 2023-07-23 22:10:27,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 22:10:27,831 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 22:10:27,833 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 22:10:27,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, UNASSIGN}] 2023-07-23 22:10:27,836 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, UNASSIGN 2023-07-23 22:10:27,838 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, UNASSIGN 2023-07-23 22:10:27,838 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, UNASSIGN 2023-07-23 22:10:27,838 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, UNASSIGN 2023-07-23 22:10:27,839 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, UNASSIGN 2023-07-23 22:10:27,840 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,840 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,840 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,841 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227840"}]},"ts":"1690150227840"} 2023-07-23 22:10:27,841 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:27,841 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227840"}]},"ts":"1690150227840"} 2023-07-23 22:10:27,841 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:27,841 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150227840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227840"}]},"ts":"1690150227840"} 2023-07-23 22:10:27,841 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227841"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227841"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227841"}]},"ts":"1690150227841"} 2023-07-23 22:10:27,841 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150227841"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150227841"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150227841"}]},"ts":"1690150227841"} 2023-07-23 22:10:27,843 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=43, state=RUNNABLE; CloseRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,844 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=42, state=RUNNABLE; CloseRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,845 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:27,846 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=40, state=RUNNABLE; CloseRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:27,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=41, state=RUNNABLE; CloseRegionProcedure 
ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:27,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 22:10:27,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 77e95de30d7232e53eee759695b6f629, disabling compactions & flushes 2023-07-23 22:10:27,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:27,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. after waiting 0 ms 2023-07-23 22:10:27,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 
2023-07-23 22:10:27,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:28,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ada1271a9b28f1f411b60809ea5570d6, disabling compactions & flushes 2023-07-23 22:10:28,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:28,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 2023-07-23 22:10:28,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. after waiting 0 ms 2023-07-23 22:10:28,001 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:28,015 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:28,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629. 2023-07-23 22:10:28,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 77e95de30d7232e53eee759695b6f629: 2023-07-23 22:10:28,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6. 
2023-07-23 22:10:28,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ada1271a9b28f1f411b60809ea5570d6: 2023-07-23 22:10:28,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:28,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:28,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 73c7538ece06421485a9cd39e3d07ba6, disabling compactions & flushes 2023-07-23 22:10:28,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 2023-07-23 22:10:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. after waiting 0 ms 2023-07-23 22:10:28,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:28,020 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=77e95de30d7232e53eee759695b6f629, regionState=CLOSED 2023-07-23 22:10:28,020 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228020"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228020"}]},"ts":"1690150228020"} 2023-07-23 22:10:28,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:28,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:28,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3e658fd82f1735e6295ab1c4733e049f, disabling compactions & flushes 2023-07-23 22:10:28,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:28,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 2023-07-23 22:10:28,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. after waiting 0 ms 2023-07-23 22:10:28,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:28,022 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=ada1271a9b28f1f411b60809ea5570d6, regionState=CLOSED 2023-07-23 22:10:28,022 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228022"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228022"}]},"ts":"1690150228022"} 2023-07-23 22:10:28,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:28,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f. 
2023-07-23 22:10:28,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3e658fd82f1735e6295ab1c4733e049f: 2023-07-23 22:10:28,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:28,035 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=43 2023-07-23 22:10:28,035 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=43, state=SUCCESS; CloseRegionProcedure 77e95de30d7232e53eee759695b6f629, server=jenkins-hbase4.apache.org,39885,1690150225039 in 180 msec 2023-07-23 22:10:28,038 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6. 
2023-07-23 22:10:28,039 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 73c7538ece06421485a9cd39e3d07ba6: 2023-07-23 22:10:28,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:28,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=41 2023-07-23 22:10:28,040 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=41, state=SUCCESS; CloseRegionProcedure ada1271a9b28f1f411b60809ea5570d6, server=jenkins-hbase4.apache.org,34191,1690150220233 in 185 msec 2023-07-23 22:10:28,040 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=3e658fd82f1735e6295ab1c4733e049f, regionState=CLOSED 2023-07-23 22:10:28,041 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228040"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228040"}]},"ts":"1690150228040"} 2023-07-23 22:10:28,041 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=77e95de30d7232e53eee759695b6f629, UNASSIGN in 200 msec 2023-07-23 22:10:28,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:28,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:28,043 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=73c7538ece06421485a9cd39e3d07ba6, regionState=CLOSED 2023-07-23 22:10:28,043 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228043"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228043"}]},"ts":"1690150228043"} 2023-07-23 22:10:28,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c5c86f52b26506da3abb0087a43dd51, disabling compactions & flushes 2023-07-23 22:10:28,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:28,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 2023-07-23 22:10:28,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. after waiting 0 ms 2023-07-23 22:10:28,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:28,049 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ada1271a9b28f1f411b60809ea5570d6, UNASSIGN in 206 msec 2023-07-23 22:10:28,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-23 22:10:28,055 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 3e658fd82f1735e6295ab1c4733e049f, server=jenkins-hbase4.apache.org,34191,1690150220233 in 203 msec 2023-07-23 22:10:28,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=40 2023-07-23 22:10:28,055 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=40, state=SUCCESS; CloseRegionProcedure 73c7538ece06421485a9cd39e3d07ba6, server=jenkins-hbase4.apache.org,39885,1690150225039 in 204 msec 2023-07-23 22:10:28,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:28,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51. 
2023-07-23 22:10:28,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c5c86f52b26506da3abb0087a43dd51: 2023-07-23 22:10:28,057 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3e658fd82f1735e6295ab1c4733e049f, UNASSIGN in 220 msec 2023-07-23 22:10:28,058 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=73c7538ece06421485a9cd39e3d07ba6, UNASSIGN in 220 msec 2023-07-23 22:10:28,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:28,059 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5c5c86f52b26506da3abb0087a43dd51, regionState=CLOSED 2023-07-23 22:10:28,059 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228059"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228059"}]},"ts":"1690150228059"} 2023-07-23 22:10:28,063 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=42 2023-07-23 22:10:28,063 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=42, state=SUCCESS; CloseRegionProcedure 5c5c86f52b26506da3abb0087a43dd51, server=jenkins-hbase4.apache.org,39885,1690150225039 in 217 msec 2023-07-23 22:10:28,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=38 2023-07-23 22:10:28,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, 
state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5c86f52b26506da3abb0087a43dd51, UNASSIGN in 228 msec 2023-07-23 22:10:28,074 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150228074"}]},"ts":"1690150228074"} 2023-07-23 22:10:28,079 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 22:10:28,081 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 22:10:28,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 269 msec 2023-07-23 22:10:28,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 22:10:28,132 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-23 22:10:28,133 INFO [Listener at localhost/42675] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:28,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:28,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-23 22:10:28,149 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-23 
22:10:28,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 22:10:28,163 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:28,163 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:28,163 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:28,163 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:28,164 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:28,167 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits] 2023-07-23 22:10:28,169 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits] 2023-07-23 22:10:28,170 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits] 2023-07-23 22:10:28,170 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits] 2023-07-23 22:10:28,170 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits] 2023-07-23 22:10:28,191 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f/recovered.edits/7.seqid 2023-07-23 22:10:28,191 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6/recovered.edits/7.seqid 2023-07-23 22:10:28,192 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6/recovered.edits/7.seqid 2023-07-23 22:10:28,191 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51/recovered.edits/7.seqid 2023-07-23 22:10:28,193 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ada1271a9b28f1f411b60809ea5570d6 2023-07-23 22:10:28,193 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3e658fd82f1735e6295ab1c4733e049f 2023-07-23 22:10:28,195 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629/recovered.edits/7.seqid 2023-07-23 22:10:28,195 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/73c7538ece06421485a9cd39e3d07ba6 2023-07-23 22:10:28,195 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5c86f52b26506da3abb0087a43dd51 2023-07-23 22:10:28,197 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/77e95de30d7232e53eee759695b6f629 2023-07-23 22:10:28,197 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 22:10:28,227 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 22:10:28,231 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 22:10:28,232 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-23 22:10:28,232 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150228232"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,233 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150228232"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,233 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150228232"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,233 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150228232"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,233 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150228232"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,236 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 22:10:28,236 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): 
Deleted regions: [{ENCODED => 3e658fd82f1735e6295ab1c4733e049f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150225519.3e658fd82f1735e6295ab1c4733e049f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 73c7538ece06421485a9cd39e3d07ba6, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150225519.73c7538ece06421485a9cd39e3d07ba6.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ada1271a9b28f1f411b60809ea5570d6, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150225519.ada1271a9b28f1f411b60809ea5570d6.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 5c5c86f52b26506da3abb0087a43dd51, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150225519.5c5c86f52b26506da3abb0087a43dd51.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 77e95de30d7232e53eee759695b6f629, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150225519.77e95de30d7232e53eee759695b6f629.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 22:10:28,236 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-23 22:10:28,236 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150228236"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:28,238 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 22:10:28,246 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,246 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,246 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,246 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,247 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,247 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 empty. 
2023-07-23 22:10:28,247 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 empty. 2023-07-23 22:10:28,247 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 empty. 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 empty. 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 empty. 
2023-07-23 22:10:28,248 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,248 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,248 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 22:10:28,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 22:10:28,273 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:28,274 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => ab6b2037ec108fdd106a9e330ec55fa0, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', 
{TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:28,275 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7c6f583fe6042537278d3be8330817a9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:28,275 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 49ac0e5466f1afaa41c8ed4c59aa54a9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:28,281 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 22:10:28,282 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-23 22:10:28,282 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:28,282 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 22:10:28,282 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 22:10:28,282 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-23 22:10:28,328 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing ab6b2037ec108fdd106a9e330ec55fa0, disabling 
compactions & flushes 2023-07-23 22:10:28,331 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:28,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:28,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. after waiting 0 ms 2023-07-23 22:10:28,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:28,331 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 
2023-07-23 22:10:28,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for ab6b2037ec108fdd106a9e330ec55fa0: 2023-07-23 22:10:28,332 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 95257a81280ed1e6e3867f082bd48802, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:28,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 49ac0e5466f1afaa41c8ed4c59aa54a9, disabling compactions & flushes 2023-07-23 22:10:28,341 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 
2023-07-23 22:10:28,341 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:28,342 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. after waiting 0 ms 2023-07-23 22:10:28,342 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:28,342 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:28,342 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 49ac0e5466f1afaa41c8ed4c59aa54a9: 2023-07-23 22:10:28,342 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0b7053f21b7f57c08b1dd55e5891c278, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:28,349 DEBUG 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 7c6f583fe6042537278d3be8330817a9, disabling compactions & flushes 2023-07-23 22:10:28,349 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:28,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:28,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. after waiting 0 ms 2023-07-23 22:10:28,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:28,349 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 
2023-07-23 22:10:28,349 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 7c6f583fe6042537278d3be8330817a9: 2023-07-23 22:10:28,355 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 22:10:28,369 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 22:10:28,370 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 95257a81280ed1e6e3867f082bd48802, disabling compactions & flushes 2023-07-23 22:10:28,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 
after waiting 0 ms 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,380 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,380 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 95257a81280ed1e6e3867f082bd48802: 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0b7053f21b7f57c08b1dd55e5891c278, disabling compactions & flushes 2023-07-23 22:10:28,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 
after waiting 0 ms 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,384 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,384 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0b7053f21b7f57c08b1dd55e5891c278: 2023-07-23 22:10:28,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228397"}]},"ts":"1690150228397"} 2023-07-23 22:10:28,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228397"}]},"ts":"1690150228397"} 2023-07-23 22:10:28,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228397"}]},"ts":"1690150228397"} 2023-07-23 22:10:28,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228397"}]},"ts":"1690150228397"} 2023-07-23 22:10:28,397 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150228397"}]},"ts":"1690150228397"} 2023-07-23 22:10:28,403 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 2023-07-23 22:10:28,404 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150228404"}]},"ts":"1690150228404"} 2023-07-23 22:10:28,407 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 22:10:28,413 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:28,413 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:28,413 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:28,413 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:28,417 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, 
ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, ASSIGN}] 2023-07-23 22:10:28,420 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, ASSIGN 2023-07-23 22:10:28,422 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, ASSIGN 2023-07-23 22:10:28,422 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, ASSIGN 2023-07-23 22:10:28,423 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, ASSIGN 2023-07-23 
22:10:28,423 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, ASSIGN 2023-07-23 22:10:28,425 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:28,427 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:28,427 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false 2023-07-23 22:10:28,427 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false 2023-07-23 22:10:28,428 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:28,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 22:10:28,576 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-23 22:10:28,578 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=95257a81280ed1e6e3867f082bd48802, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:28,578 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=ab6b2037ec108fdd106a9e330ec55fa0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,578 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=49ac0e5466f1afaa41c8ed4c59aa54a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:28,578 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0b7053f21b7f57c08b1dd55e5891c278, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,579 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228578"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150228578"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150228578"}]},"ts":"1690150228578"} 2023-07-23 22:10:28,578 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=7c6f583fe6042537278d3be8330817a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,579 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228578"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150228578"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150228578"}]},"ts":"1690150228578"} 2023-07-23 22:10:28,579 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228578"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150228578"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150228578"}]},"ts":"1690150228578"} 2023-07-23 22:10:28,579 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228578"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150228578"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150228578"}]},"ts":"1690150228578"} 2023-07-23 22:10:28,579 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228578"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150228578"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150228578"}]},"ts":"1690150228578"} 2023-07-23 22:10:28,581 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; OpenRegionProcedure 49ac0e5466f1afaa41c8ed4c59aa54a9, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:28,584 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=54, state=RUNNABLE; OpenRegionProcedure 0b7053f21b7f57c08b1dd55e5891c278, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:28,586 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; OpenRegionProcedure 7c6f583fe6042537278d3be8330817a9, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:28,588 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=50, state=RUNNABLE; OpenRegionProcedure ab6b2037ec108fdd106a9e330ec55fa0, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:28,589 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=53, state=RUNNABLE; OpenRegionProcedure 95257a81280ed1e6e3867f082bd48802, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:28,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 
2023-07-23 22:10:28,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c6f583fe6042537278d3be8330817a9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 22:10:28,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 95257a81280ed1e6e3867f082bd48802, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 
95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,751 INFO [StoreOpener-7c6f583fe6042537278d3be8330817a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,751 INFO [StoreOpener-95257a81280ed1e6e3867f082bd48802-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,754 DEBUG [StoreOpener-7c6f583fe6042537278d3be8330817a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/f 2023-07-23 22:10:28,754 DEBUG [StoreOpener-7c6f583fe6042537278d3be8330817a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/f 2023-07-23 22:10:28,755 INFO [StoreOpener-7c6f583fe6042537278d3be8330817a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c6f583fe6042537278d3be8330817a9 columnFamilyName f 2023-07-23 22:10:28,756 DEBUG [StoreOpener-95257a81280ed1e6e3867f082bd48802-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/f 2023-07-23 22:10:28,756 DEBUG [StoreOpener-95257a81280ed1e6e3867f082bd48802-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/f 2023-07-23 22:10:28,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 22:10:28,757 INFO [StoreOpener-7c6f583fe6042537278d3be8330817a9-1] regionserver.HStore(310): Store=7c6f583fe6042537278d3be8330817a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 22:10:28,757 INFO [StoreOpener-95257a81280ed1e6e3867f082bd48802-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 95257a81280ed1e6e3867f082bd48802 columnFamilyName f 2023-07-23 22:10:28,758 INFO [StoreOpener-95257a81280ed1e6e3867f082bd48802-1] regionserver.HStore(310): Store=95257a81280ed1e6e3867f082bd48802/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:28,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:28,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:28,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:28,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:28,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 95257a81280ed1e6e3867f082bd48802; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11243443520, jitterRate=0.04712727665901184}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:28,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7c6f583fe6042537278d3be8330817a9; next 
sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11250310080, jitterRate=0.04776677489280701}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:28,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 95257a81280ed1e6e3867f082bd48802: 2023-07-23 22:10:28,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7c6f583fe6042537278d3be8330817a9: 2023-07-23 22:10:28,781 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802., pid=59, masterSystemTime=1690150228739 2023-07-23 22:10:28,781 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9., pid=57, masterSystemTime=1690150228739 2023-07-23 22:10:28,784 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=95257a81280ed1e6e3867f082bd48802, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:28,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:28,784 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 
2023-07-23 22:10:28,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:28,785 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228784"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150228784"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150228784"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150228784"}]},"ts":"1690150228784"} 2023-07-23 22:10:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49ac0e5466f1afaa41c8ed4c59aa54a9, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 22:10:28,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 
22:10:28,786 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=7c6f583fe6042537278d3be8330817a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,786 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228786"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150228786"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150228786"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150228786"}]},"ts":"1690150228786"} 2023-07-23 22:10:28,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:28,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:28,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 
2023-07-23 22:10:28,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab6b2037ec108fdd106a9e330ec55fa0, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 22:10:28,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,787 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,792 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=53 2023-07-23 22:10:28,792 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=53, state=SUCCESS; OpenRegionProcedure 95257a81280ed1e6e3867f082bd48802, server=jenkins-hbase4.apache.org,39885,1690150225039 in 200 msec 2023-07-23 22:10:28,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-23 22:10:28,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; OpenRegionProcedure 7c6f583fe6042537278d3be8330817a9, server=jenkins-hbase4.apache.org,34191,1690150220233 in 205 msec 2023-07-23 22:10:28,795 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, ASSIGN in 376 msec 2023-07-23 22:10:28,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, ASSIGN in 378 msec 2023-07-23 22:10:28,799 INFO [StoreOpener-49ac0e5466f1afaa41c8ed4c59aa54a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,804 INFO [StoreOpener-ab6b2037ec108fdd106a9e330ec55fa0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,806 DEBUG [StoreOpener-49ac0e5466f1afaa41c8ed4c59aa54a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/f 2023-07-23 22:10:28,806 DEBUG [StoreOpener-49ac0e5466f1afaa41c8ed4c59aa54a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/f 2023-07-23 22:10:28,806 DEBUG [StoreOpener-ab6b2037ec108fdd106a9e330ec55fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/f 2023-07-23 22:10:28,806 DEBUG [StoreOpener-ab6b2037ec108fdd106a9e330ec55fa0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/f 2023-07-23 22:10:28,806 INFO [StoreOpener-49ac0e5466f1afaa41c8ed4c59aa54a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49ac0e5466f1afaa41c8ed4c59aa54a9 columnFamilyName f 2023-07-23 22:10:28,807 INFO [StoreOpener-ab6b2037ec108fdd106a9e330ec55fa0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
ab6b2037ec108fdd106a9e330ec55fa0 columnFamilyName f 2023-07-23 22:10:28,807 INFO [StoreOpener-49ac0e5466f1afaa41c8ed4c59aa54a9-1] regionserver.HStore(310): Store=49ac0e5466f1afaa41c8ed4c59aa54a9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:28,808 INFO [StoreOpener-ab6b2037ec108fdd106a9e330ec55fa0-1] regionserver.HStore(310): Store=ab6b2037ec108fdd106a9e330ec55fa0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,813 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 
ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:28,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:28,820 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab6b2037ec108fdd106a9e330ec55fa0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9805498080, jitterRate=-0.08679182827472687}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:28,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab6b2037ec108fdd106a9e330ec55fa0: 2023-07-23 22:10:28,821 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0., pid=58, masterSystemTime=1690150228739 2023-07-23 22:10:28,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:28,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:28,823 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 
2023-07-23 22:10:28,823 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0b7053f21b7f57c08b1dd55e5891c278, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 22:10:28,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:28,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,824 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,825 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=ab6b2037ec108fdd106a9e330ec55fa0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,825 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228825"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150228825"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150228825"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150228825"}]},"ts":"1690150228825"} 2023-07-23 22:10:28,825 
INFO [StoreOpener-0b7053f21b7f57c08b1dd55e5891c278-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,828 DEBUG [StoreOpener-0b7053f21b7f57c08b1dd55e5891c278-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/f 2023-07-23 22:10:28,828 DEBUG [StoreOpener-0b7053f21b7f57c08b1dd55e5891c278-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/f 2023-07-23 22:10:28,828 INFO [StoreOpener-0b7053f21b7f57c08b1dd55e5891c278-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0b7053f21b7f57c08b1dd55e5891c278 columnFamilyName f 2023-07-23 22:10:28,829 INFO [StoreOpener-0b7053f21b7f57c08b1dd55e5891c278-1] regionserver.HStore(310): Store=0b7053f21b7f57c08b1dd55e5891c278/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 22:10:28,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,830 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=50 2023-07-23 22:10:28,830 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=50, state=SUCCESS; OpenRegionProcedure ab6b2037ec108fdd106a9e330ec55fa0, server=jenkins-hbase4.apache.org,34191,1690150220233 in 239 msec 2023-07-23 22:10:28,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,832 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, ASSIGN in 416 msec 2023-07-23 22:10:28,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:28,836 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:28,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:28,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:28,843 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49ac0e5466f1afaa41c8ed4c59aa54a9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10426509280, jitterRate=-0.028955653309822083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:28,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49ac0e5466f1afaa41c8ed4c59aa54a9: 2023-07-23 22:10:28,844 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0b7053f21b7f57c08b1dd55e5891c278; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11575733440, jitterRate=0.07807418704032898}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:28,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0b7053f21b7f57c08b1dd55e5891c278: 2023-07-23 22:10:28,845 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9., pid=55, masterSystemTime=1690150228739 2023-07-23 22:10:28,845 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278., pid=56, masterSystemTime=1690150228739 2023-07-23 22:10:28,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:28,848 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=49ac0e5466f1afaa41c8ed4c59aa54a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:28,848 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150228847"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150228847"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150228847"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150228847"}]},"ts":"1690150228847"} 2023-07-23 22:10:28,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,848 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:28,848 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 
2023-07-23 22:10:28,849 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0b7053f21b7f57c08b1dd55e5891c278, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:28,849 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150228849"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150228849"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150228849"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150228849"}]},"ts":"1690150228849"} 2023-07-23 22:10:28,853 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-23 22:10:28,853 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; OpenRegionProcedure 49ac0e5466f1afaa41c8ed4c59aa54a9, server=jenkins-hbase4.apache.org,39885,1690150225039 in 269 msec 2023-07-23 22:10:28,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=54 2023-07-23 22:10:28,855 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=54, state=SUCCESS; OpenRegionProcedure 0b7053f21b7f57c08b1dd55e5891c278, server=jenkins-hbase4.apache.org,34191,1690150220233 in 268 msec 2023-07-23 22:10:28,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, ASSIGN in 439 msec 2023-07-23 22:10:28,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-23 22:10:28,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, 
ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, ASSIGN in 438 msec 2023-07-23 22:10:28,858 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150228857"}]},"ts":"1690150228857"} 2023-07-23 22:10:28,859 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 22:10:28,862 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-23 22:10:28,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 722 msec 2023-07-23 22:10:29,075 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 22:10:29,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 22:10:29,259 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-23 22:10:29,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:29,262 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:29,264 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,279 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150229279"}]},"ts":"1690150229279"} 2023-07-23 22:10:29,281 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 22:10:29,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 22:10:29,283 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 22:10:29,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=ab6b2037ec108fdd106a9e330ec55fa0, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, UNASSIGN}] 2023-07-23 22:10:29,292 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, UNASSIGN 2023-07-23 22:10:29,294 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, UNASSIGN 2023-07-23 22:10:29,294 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, UNASSIGN 2023-07-23 22:10:29,295 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, UNASSIGN 2023-07-23 22:10:29,295 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, UNASSIGN 2023-07-23 22:10:29,303 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=95257a81280ed1e6e3867f082bd48802, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:29,303 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=7c6f583fe6042537278d3be8330817a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:29,303 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150229303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150229303"}]},"ts":"1690150229303"} 2023-07-23 22:10:29,303 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150229303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150229303"}]},"ts":"1690150229303"} 2023-07-23 22:10:29,303 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0b7053f21b7f57c08b1dd55e5891c278, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:29,303 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=49ac0e5466f1afaa41c8ed4c59aa54a9, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,39885,1690150225039 2023-07-23 22:10:29,304 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150229303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150229303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150229303"}]},"ts":"1690150229303"} 2023-07-23 22:10:29,304 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150229303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150229303"}]},"ts":"1690150229303"} 2023-07-23 22:10:29,303 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=ab6b2037ec108fdd106a9e330ec55fa0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:29,304 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150229303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150229303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150229303"}]},"ts":"1690150229303"} 2023-07-23 22:10:29,307 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=63, state=RUNNABLE; CloseRegionProcedure 7c6f583fe6042537278d3be8330817a9, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:29,309 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=64, 
state=RUNNABLE; CloseRegionProcedure 95257a81280ed1e6e3867f082bd48802, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:29,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=65, state=RUNNABLE; CloseRegionProcedure 0b7053f21b7f57c08b1dd55e5891c278, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:29,311 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=62, state=RUNNABLE; CloseRegionProcedure 49ac0e5466f1afaa41c8ed4c59aa54a9, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:29,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=61, state=RUNNABLE; CloseRegionProcedure ab6b2037ec108fdd106a9e330ec55fa0, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:29,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 22:10:29,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:29,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7c6f583fe6042537278d3be8330817a9, disabling compactions & flushes 2023-07-23 22:10:29,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:29,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 
2023-07-23 22:10:29,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. after waiting 0 ms 2023-07-23 22:10:29,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 2023-07-23 22:10:29,465 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:29,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49ac0e5466f1afaa41c8ed4c59aa54a9, disabling compactions & flushes 2023-07-23 22:10:29,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:29,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:29,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. after waiting 0 ms 2023-07-23 22:10:29,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 
2023-07-23 22:10:29,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:29,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:29,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9. 2023-07-23 22:10:29,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49ac0e5466f1afaa41c8ed4c59aa54a9: 2023-07-23 22:10:29,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9. 
2023-07-23 22:10:29,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7c6f583fe6042537278d3be8330817a9: 2023-07-23 22:10:29,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:29,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:29,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab6b2037ec108fdd106a9e330ec55fa0, disabling compactions & flushes 2023-07-23 22:10:29,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:29,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 2023-07-23 22:10:29,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. after waiting 0 ms 2023-07-23 22:10:29,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 
2023-07-23 22:10:29,479 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=7c6f583fe6042537278d3be8330817a9, regionState=CLOSED 2023-07-23 22:10:29,479 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229479"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150229479"}]},"ts":"1690150229479"} 2023-07-23 22:10:29,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:29,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:29,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 95257a81280ed1e6e3867f082bd48802, disabling compactions & flushes 2023-07-23 22:10:29,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:29,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 2023-07-23 22:10:29,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. after waiting 0 ms 2023-07-23 22:10:29,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 
2023-07-23 22:10:29,483 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=49ac0e5466f1afaa41c8ed4c59aa54a9, regionState=CLOSED 2023-07-23 22:10:29,483 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229483"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150229483"}]},"ts":"1690150229483"} 2023-07-23 22:10:29,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:29,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0. 
2023-07-23 22:10:29,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab6b2037ec108fdd106a9e330ec55fa0: 2023-07-23 22:10:29,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:29,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=62 2023-07-23 22:10:29,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=63 2023-07-23 22:10:29,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=62, state=SUCCESS; CloseRegionProcedure 49ac0e5466f1afaa41c8ed4c59aa54a9, server=jenkins-hbase4.apache.org,39885,1690150225039 in 174 msec 2023-07-23 22:10:29,489 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; CloseRegionProcedure 7c6f583fe6042537278d3be8330817a9, server=jenkins-hbase4.apache.org,34191,1690150220233 in 178 msec 2023-07-23 22:10:29,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802. 
2023-07-23 22:10:29,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 95257a81280ed1e6e3867f082bd48802: 2023-07-23 22:10:29,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:29,492 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:29,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0b7053f21b7f57c08b1dd55e5891c278, disabling compactions & flushes 2023-07-23 22:10:29,493 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:29,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:29,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7c6f583fe6042537278d3be8330817a9, UNASSIGN in 200 msec 2023-07-23 22:10:29,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 
after waiting 0 ms 2023-07-23 22:10:29,493 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=ab6b2037ec108fdd106a9e330ec55fa0, regionState=CLOSED 2023-07-23 22:10:29,493 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=49ac0e5466f1afaa41c8ed4c59aa54a9, UNASSIGN in 200 msec 2023-07-23 22:10:29,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 2023-07-23 22:10:29,494 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150229493"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150229493"}]},"ts":"1690150229493"} 2023-07-23 22:10:29,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:29,495 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=95257a81280ed1e6e3867f082bd48802, regionState=CLOSED 2023-07-23 22:10:29,495 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690150229495"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150229495"}]},"ts":"1690150229495"} 2023-07-23 22:10:29,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=61 2023-07-23 22:10:29,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=61, state=SUCCESS; 
CloseRegionProcedure ab6b2037ec108fdd106a9e330ec55fa0, server=jenkins-hbase4.apache.org,34191,1690150220233 in 184 msec 2023-07-23 22:10:29,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:29,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=64 2023-07-23 22:10:29,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=64, state=SUCCESS; CloseRegionProcedure 95257a81280ed1e6e3867f082bd48802, server=jenkins-hbase4.apache.org,39885,1690150225039 in 188 msec 2023-07-23 22:10:29,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278. 
2023-07-23 22:10:29,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0b7053f21b7f57c08b1dd55e5891c278: 2023-07-23 22:10:29,501 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab6b2037ec108fdd106a9e330ec55fa0, UNASSIGN in 210 msec 2023-07-23 22:10:29,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=95257a81280ed1e6e3867f082bd48802, UNASSIGN in 211 msec 2023-07-23 22:10:29,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:29,503 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0b7053f21b7f57c08b1dd55e5891c278, regionState=CLOSED 2023-07-23 22:10:29,503 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690150229503"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150229503"}]},"ts":"1690150229503"} 2023-07-23 22:10:29,507 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=65 2023-07-23 22:10:29,507 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=65, state=SUCCESS; CloseRegionProcedure 0b7053f21b7f57c08b1dd55e5891c278, server=jenkins-hbase4.apache.org,34191,1690150220233 in 195 msec 2023-07-23 22:10:29,509 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-23 22:10:29,509 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b7053f21b7f57c08b1dd55e5891c278, UNASSIGN in 218 msec 2023-07-23 22:10:29,510 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150229510"}]},"ts":"1690150229510"} 2023-07-23 22:10:29,512 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 22:10:29,514 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 22:10:29,517 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 250 msec 2023-07-23 22:10:29,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 22:10:29,586 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-23 22:10:29,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,601 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint(577): 
Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_627083458' 2023-07-23 22:10:29,603 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:29,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 22:10:29,620 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:29,620 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:29,620 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 
22:10:29,621 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:29,621 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:29,624 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/recovered.edits] 2023-07-23 22:10:29,624 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/recovered.edits] 2023-07-23 22:10:29,624 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/recovered.edits] 2023-07-23 22:10:29,624 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving 
[FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/recovered.edits] 2023-07-23 22:10:29,625 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/recovered.edits] 2023-07-23 22:10:29,634 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9/recovered.edits/4.seqid 2023-07-23 22:10:29,635 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278/recovered.edits/4.seqid 2023-07-23 22:10:29,636 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802/recovered.edits/4.seqid 2023-07-23 22:10:29,636 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7c6f583fe6042537278d3be8330817a9 2023-07-23 22:10:29,636 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b7053f21b7f57c08b1dd55e5891c278 2023-07-23 22:10:29,638 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9/recovered.edits/4.seqid 2023-07-23 22:10:29,638 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/95257a81280ed1e6e3867f082bd48802 2023-07-23 22:10:29,638 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/49ac0e5466f1afaa41c8ed4c59aa54a9 2023-07-23 22:10:29,639 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0/recovered.edits/4.seqid 2023-07-23 22:10:29,639 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab6b2037ec108fdd106a9e330ec55fa0 2023-07-23 22:10:29,640 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 22:10:29,642 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,648 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 22:10:29,650 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 22:10:29,651 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,651 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-23 22:10:29,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150229652"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150229652"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150229652"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150229652"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,652 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150229652"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,654 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 22:10:29,654 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ab6b2037ec108fdd106a9e330ec55fa0, NAME => 'Group_testTableMoveTruncateAndDrop,,1690150228200.ab6b2037ec108fdd106a9e330ec55fa0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 49ac0e5466f1afaa41c8ed4c59aa54a9, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1690150228200.49ac0e5466f1afaa41c8ed4c59aa54a9.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7c6f583fe6042537278d3be8330817a9, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690150228200.7c6f583fe6042537278d3be8330817a9.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 95257a81280ed1e6e3867f082bd48802, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690150228200.95257a81280ed1e6e3867f082bd48802.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 0b7053f21b7f57c08b1dd55e5891c278, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690150228200.0b7053f21b7f57c08b1dd55e5891c278.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 22:10:29,654 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-23 22:10:29,654 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150229654"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:29,656 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 22:10:29,658 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 22:10:29,659 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 66 msec 2023-07-23 22:10:29,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 22:10:29,722 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-23 22:10:29,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:29,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:29,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:29,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:29,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default 2023-07-23 22:10:29,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:29,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_627083458, current retry=0 2023-07-23 22:10:29,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_627083458 => default 2023-07-23 22:10:29,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:29,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_627083458 2023-07-23 22:10:29,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:29,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:29,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:29,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:29,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:29,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:29,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:29,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:29,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:29,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:29,777 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:29,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:29,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,782 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:29,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:29,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:29,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151429795, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:29,796 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at 
org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:29,799 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:29,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,800 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:29,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:29,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:29,838 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=493 (was 420) Potentially hanging thread: PacketResponder: BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-636-acceptor-0@2078e5e5-ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:35271} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39885Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52385@0x4f9c4082 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1918914111_17 at /127.0.0.1:56614 [Receiving block BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39885-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52385@0x4f9c4082-SendThread(127.0.0.1:52385) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1918914111_17 at /127.0.0.1:33532 [Receiving block BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1918914111_17 at /127.0.0.1:56760 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: 
BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:36271 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1918914111_17 at /127.0.0.1:46150 [Receiving block BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39885 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1400815784-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-70f9750-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1350916087-172.31.14.131-1690150214233:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36271 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:52385@0x4f9c4082-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1400815784-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1648874798_17 at /127.0.0.1:56684 [Waiting for operation #3] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9-prefix:jenkins-hbase4.apache.org,39885,1690150225039 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1648874798_17 at /127.0.0.1:33516 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39885 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 675) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 465) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 178), AvailableMemoryMB=6272 (was 6533) 2023-07-23 22:10:29,866 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=493, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=176, AvailableMemoryMB=6270 2023-07-23 22:10:29,866 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-23 22:10:29,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:29,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:29,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:29,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:29,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:29,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:29,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:29,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:29,902 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:29,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:29,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,907 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:29,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:29,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:29,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151429919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:29,920 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:29,922 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:29,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,924 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:29,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:29,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:29,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-23 22:10:29,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) 
at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53220 deadline: 1690151429926, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 22:10:29,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-23 22:10:29,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:53220 deadline: 1690151429928, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 22:10:29,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-23 22:10:29,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:53220 deadline: 1690151429929, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 22:10:29,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-23 22:10:29,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-23 22:10:29,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:29,939 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:29,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:29,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:29,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:29,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:29,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:29,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-23 22:10:29,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:29,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:29,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:29,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:29,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:29,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:29,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:29,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:29,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:29,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:29,974 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:29,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:29,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:29,978 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:29,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:29,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:29,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:29,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:29,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151429992, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:29,993 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at 
org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:29,995 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:29,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:29,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:29,996 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:29,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:29,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:30,015 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=496 (was 493) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=772 (was 772), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=176 (was 176), AvailableMemoryMB=6267 (was 6270) 2023-07-23 22:10:30,035 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=496, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=176, AvailableMemoryMB=6264 2023-07-23 22:10:30,036 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-23 22:10:30,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:30,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:30,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:30,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:30,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:30,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:30,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:30,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:30,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:30,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:30,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:30,053 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:30,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:30,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:30,058 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:30,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:30,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:30,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:30,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:30,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:30,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:30,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151430070, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:30,071 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:30,073 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:30,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:30,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:30,075 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:30,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:30,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:30,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:30,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:30,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:30,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:30,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 2023-07-23 22:10:30,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:30,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:30,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:30,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:30,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:30,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:30,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:30,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup bar 2023-07-23 22:10:30,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:30,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:30,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:30,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:30,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(238): Moving server region f13ed9a05b812dd1ab7a8c5d46530103, which do not belong to RSGroup bar 2023-07-23 22:10:30,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, REOPEN/MOVE 2023-07-23 22:10:30,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 22:10:30,101 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, REOPEN/MOVE 2023-07-23 22:10:30,102 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:30,102 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150230102"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150230102"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150230102"}]},"ts":"1690150230102"} 2023-07-23 22:10:30,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE; CloseRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:30,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f13ed9a05b812dd1ab7a8c5d46530103, disabling compactions & flushes 2023-07-23 22:10:30,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:30,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:30,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. after waiting 0 ms 2023-07-23 22:10:30,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 
2023-07-23 22:10:30,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f13ed9a05b812dd1ab7a8c5d46530103 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-23 22:10:30,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/.tmp/m/60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/.tmp/m/60e739522c374a4a9242573f2af24a14 as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m/60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m/60e739522c374a4a9242573f2af24a14, entries=9, sequenceid=26, filesize=5.5 K 2023-07-23 22:10:30,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for 
f13ed9a05b812dd1ab7a8c5d46530103 in 237ms, sequenceid=26, compaction requested=false 2023-07-23 22:10:30,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-23 22:10:30,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:30,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:30,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f13ed9a05b812dd1ab7a8c5d46530103: 2023-07-23 22:10:30,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f13ed9a05b812dd1ab7a8c5d46530103 move to jenkins-hbase4.apache.org,46085,1690150220016 record at close sequenceid=26 2023-07-23 22:10:30,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,534 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=CLOSED 2023-07-23 22:10:30,535 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150230534"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150230534"}]},"ts":"1690150230534"} 2023-07-23 22:10:30,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-23 22:10:30,548 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; CloseRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,41457,1690150220404 in 436 msec 2023-07-23 22:10:30,549 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:30,699 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:30,700 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150230699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150230699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150230699"}]},"ts":"1690150230699"} 2023-07-23 22:10:30,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=72, state=RUNNABLE; OpenRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:30,859 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 
2023-07-23 22:10:30,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f13ed9a05b812dd1ab7a8c5d46530103, NAME => 'hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:30,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 22:10:30,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. service=MultiRowMutationService 2023-07-23 22:10:30,860 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 22:10:30,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:30,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,860 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,867 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,868 DEBUG [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m 2023-07-23 22:10:30,868 DEBUG [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m 2023-07-23 22:10:30,868 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f13ed9a05b812dd1ab7a8c5d46530103 columnFamilyName m 2023-07-23 22:10:30,883 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,883 DEBUG [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] regionserver.HStore(539): loaded hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m/60e739522c374a4a9242573f2af24a14 2023-07-23 22:10:30,884 INFO [StoreOpener-f13ed9a05b812dd1ab7a8c5d46530103-1] regionserver.HStore(310): Store=f13ed9a05b812dd1ab7a8c5d46530103/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:30,885 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,887 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,894 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for f13ed9a05b812dd1ab7a8c5d46530103 2023-07-23 22:10:30,895 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f13ed9a05b812dd1ab7a8c5d46530103; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1e3e766b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:30,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f13ed9a05b812dd1ab7a8c5d46530103: 2023-07-23 22:10:30,896 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103., pid=74, masterSystemTime=1690150230854 2023-07-23 22:10:30,898 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 2023-07-23 22:10:30,898 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. 
2023-07-23 22:10:30,898 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f13ed9a05b812dd1ab7a8c5d46530103, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:30,898 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150230898"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150230898"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150230898"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150230898"}]},"ts":"1690150230898"} 2023-07-23 22:10:30,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=72 2023-07-23 22:10:30,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=72, state=SUCCESS; OpenRegionProcedure f13ed9a05b812dd1ab7a8c5d46530103, server=jenkins-hbase4.apache.org,46085,1690150220016 in 198 msec 2023-07-23 22:10:30,904 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f13ed9a05b812dd1ab7a8c5d46530103, REOPEN/MOVE in 803 msec 2023-07-23 22:10:31,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-23 22:10:31,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039, jenkins-hbase4.apache.org,41457,1690150220404] are moved back to default 2023-07-23 22:10:31,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-23 22:10:31,102 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:31,103 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41457] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:38202 deadline: 1690150291103, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46085 startCode=1690150220016. As of locationSeqNum=26. 2023-07-23 22:10:31,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:31,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:31,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 22:10:31,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:31,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-23 22:10:31,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:31,234 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:31,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 75 2023-07-23 22:10:31,235 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41457] ipc.CallRunner(144): callId: 179 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:38194 deadline: 1690150291235, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46085 startCode=1690150220016. As of locationSeqNum=26. 
2023-07-23 22:10:31,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 22:10:31,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 22:10:31,340 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:31,341 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:31,342 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:31,342 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:31,346 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:31,348 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,349 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d empty. 
2023-07-23 22:10:31,349 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,349 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 22:10:31,372 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:31,373 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => df611f47dda86121f8b40a9bedd25c0d, NAME => 'Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:31,388 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:31,388 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing df611f47dda86121f8b40a9bedd25c0d, disabling compactions & flushes 2023-07-23 22:10:31,388 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): 
Closing region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,389 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,389 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. after waiting 0 ms 2023-07-23 22:10:31,389 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,389 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,389 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:31,391 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:31,393 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150231393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150231393"}]},"ts":"1690150231393"} 2023-07-23 22:10:31,394 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:31,396 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:31,396 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150231396"}]},"ts":"1690150231396"} 2023-07-23 22:10:31,398 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-23 22:10:31,405 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, ASSIGN}] 2023-07-23 22:10:31,408 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, ASSIGN 2023-07-23 22:10:31,409 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=76, ppid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:31,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 22:10:31,560 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 
2023-07-23 22:10:31,561 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150231560"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150231560"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150231560"}]},"ts":"1690150231560"} 2023-07-23 22:10:31,565 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=76, state=RUNNABLE; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:31,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df611f47dda86121f8b40a9bedd25c0d, NAME => 'Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:31,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:31,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading 
for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,723 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,725 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:31,725 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:31,726 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df611f47dda86121f8b40a9bedd25c0d columnFamilyName f 2023-07-23 22:10:31,727 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(310): Store=df611f47dda86121f8b40a9bedd25c0d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:31,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:31,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:31,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df611f47dda86121f8b40a9bedd25c0d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9549439360, jitterRate=-0.11063915491104126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:31,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:31,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d., pid=77, masterSystemTime=1690150231716 2023-07-23 22:10:31,738 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:31,739 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=76 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:31,739 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150231739"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150231739"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150231739"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150231739"}]},"ts":"1690150231739"} 2023-07-23 22:10:31,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=76 2023-07-23 22:10:31,743 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=76, state=SUCCESS; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016 in 179 msec 2023-07-23 22:10:31,747 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-23 22:10:31,747 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, ASSIGN in 338 msec 2023-07-23 22:10:31,747 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:31,748 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150231748"}]},"ts":"1690150231748"} 2023-07-23 22:10:31,749 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-23 22:10:31,753 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=75, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:31,754 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 522 msec 2023-07-23 22:10:31,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=75 2023-07-23 22:10:31,841 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 75 completed 2023-07-23 22:10:31,841 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-23 22:10:31,841 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:31,846 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-23 22:10:31,846 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:31,846 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-23 22:10:31,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-23 22:10:31,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:31,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:31,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:31,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:31,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-23 22:10:31,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region df611f47dda86121f8b40a9bedd25c0d to RSGroup bar 2023-07-23 22:10:31,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:31,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:31,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:31,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:31,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 22:10:31,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:31,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE 2023-07-23 22:10:31,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-23 22:10:31,868 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE 2023-07-23 22:10:31,869 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:31,869 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150231868"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150231868"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150231868"}]},"ts":"1690150231868"} 2023-07-23 22:10:31,870 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:32,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df611f47dda86121f8b40a9bedd25c0d, disabling compactions & flushes 2023-07-23 22:10:32,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:32,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:32,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. after waiting 0 ms 2023-07-23 22:10:32,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:32,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:32,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
2023-07-23 22:10:32,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:32,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding df611f47dda86121f8b40a9bedd25c0d move to jenkins-hbase4.apache.org,41457,1690150220404 record at close sequenceid=2 2023-07-23 22:10:32,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,040 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSED 2023-07-23 22:10:32,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150232040"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150232040"}]},"ts":"1690150232040"} 2023-07-23 22:10:32,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-23 22:10:32,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016 in 173 msec 2023-07-23 22:10:32,049 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:32,199 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 
1 retained the pre-restart assignment. 2023-07-23 22:10:32,199 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:32,200 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150232199"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150232199"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150232199"}]},"ts":"1690150232199"} 2023-07-23 22:10:32,201 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:32,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
2023-07-23 22:10:32,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df611f47dda86121f8b40a9bedd25c0d, NAME => 'Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:32,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:32,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,360 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,361 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:32,361 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:32,362 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df611f47dda86121f8b40a9bedd25c0d columnFamilyName f 2023-07-23 22:10:32,363 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(310): Store=df611f47dda86121f8b40a9bedd25c0d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:32,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:32,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df611f47dda86121f8b40a9bedd25c0d 
2023-07-23 22:10:32,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df611f47dda86121f8b40a9bedd25c0d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11512577600, jitterRate=0.07219234108924866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:32,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:32,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d., pid=80, masterSystemTime=1690150232353 2023-07-23 22:10:32,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:32,373 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
2023-07-23 22:10:32,373 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:32,374 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150232373"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150232373"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150232373"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150232373"}]},"ts":"1690150232373"} 2023-07-23 22:10:32,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-23 22:10:32,378 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,41457,1690150220404 in 174 msec 2023-07-23 22:10:32,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE in 513 msec 2023-07-23 22:10:32,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-23 22:10:32,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-23 22:10:32,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:32,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:32,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:32,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 22:10:32,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:32,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 22:10:32,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:32,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53220 deadline: 1690151432877, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-23 22:10:32,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default 2023-07-23 22:10:32,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:32,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:53220 deadline: 1690151432878, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
2023-07-23 22:10:32,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-23 22:10:32,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:32,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:32,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:32,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:32,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-23 22:10:32,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region df611f47dda86121f8b40a9bedd25c0d to RSGroup default 2023-07-23 22:10:32,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE 2023-07-23 22:10:32,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 22:10:32,889 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, 
REOPEN/MOVE 2023-07-23 22:10:32,890 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:32,890 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150232890"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150232890"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150232890"}]},"ts":"1690150232890"} 2023-07-23 22:10:32,892 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:33,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df611f47dda86121f8b40a9bedd25c0d, disabling compactions & flushes 2023-07-23 22:10:33,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:33,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:33,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
after waiting 0 ms 2023-07-23 22:10:33,048 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:33,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:33,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:33,053 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:33,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding df611f47dda86121f8b40a9bedd25c0d move to jenkins-hbase4.apache.org,46085,1690150220016 record at close sequenceid=5 2023-07-23 22:10:33,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,056 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSED 2023-07-23 22:10:33,056 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150233056"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150233056"}]},"ts":"1690150233056"} 2023-07-23 22:10:33,059 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-23 22:10:33,059 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,41457,1690150220404 in 165 msec 2023-07-23 22:10:33,060 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:33,210 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:33,211 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150233210"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150233210"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150233210"}]},"ts":"1690150233210"} 2023-07-23 22:10:33,213 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:33,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
2023-07-23 22:10:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df611f47dda86121f8b40a9bedd25c0d, NAME => 'Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,371 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,372 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:33,372 DEBUG [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f 2023-07-23 22:10:33,373 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df611f47dda86121f8b40a9bedd25c0d columnFamilyName f 2023-07-23 22:10:33,373 INFO [StoreOpener-df611f47dda86121f8b40a9bedd25c0d-1] regionserver.HStore(310): Store=df611f47dda86121f8b40a9bedd25c0d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:33,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:33,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df611f47dda86121f8b40a9bedd25c0d 
2023-07-23 22:10:33,380 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df611f47dda86121f8b40a9bedd25c0d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10292543840, jitterRate=-0.041432157158851624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:33,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:33,380 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d., pid=83, masterSystemTime=1690150233364 2023-07-23 22:10:33,382 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:33,382 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
2023-07-23 22:10:33,383 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:33,383 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150233382"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150233382"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150233382"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150233382"}]},"ts":"1690150233382"} 2023-07-23 22:10:33,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-23 22:10:33,386 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016 in 172 msec 2023-07-23 22:10:33,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, REOPEN/MOVE in 499 msec 2023-07-23 22:10:33,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-23 22:10:33,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-23 22:10:33,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:33,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:33,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:33,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 22:10:33,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:33,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:53220 deadline: 1690151433896, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
2023-07-23 22:10:33,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default 2023-07-23 22:10:33,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:33,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 22:10:33,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:33,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:33,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-23 22:10:33,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039, jenkins-hbase4.apache.org,41457,1690150220404] are moved back to bar 2023-07-23 22:10:33,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-23 22:10:33,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:33,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 
list rsgroup 2023-07-23 22:10:33,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:33,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 22:10:33,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:33,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:33,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:33,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:33,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:33,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:33,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:33,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service 
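The log above exercises the rsgroup-removal ordering that TestRSGroupsAdmin1 is verifying: removing group `bar` fails while it still holds a table (ConstraintException at 22:10:32,877), draining its servers fails while the table would be left unhosted (22:10:32,878), and removal succeeds only after the table is moved to `default`, then the servers, then `RemoveRSGroup` (22:10:33,914). The following is a minimal illustrative model of those constraints, not HBase code; the `RSGroupModel` class, its method names, and the server/table names in the test are assumptions made for this sketch.

```python
# Illustrative sketch of the rsgroup-removal preconditions seen in the log.
# This is NOT the HBase implementation; it only models the constraint ordering:
#   1. a group holding tables cannot be removed,
#   2. all servers cannot leave a group that still holds tables,
#   3. removal succeeds once tables, then servers, are moved elsewhere.

class ConstraintException(Exception):
    """Stand-in for org.apache.hadoop.hbase.constraint.ConstraintException."""

class RSGroupModel:
    def __init__(self):
        # group name -> its member tables and servers
        self.groups = {"default": {"tables": set(), "servers": set()}}

    def add_group(self, name):
        self.groups[name] = {"tables": set(), "servers": set()}

    def move_tables(self, tables, target):
        # Detach the tables from whichever group holds them, then attach to target.
        for g in self.groups.values():
            g["tables"] -= set(tables)
        self.groups[target]["tables"] |= set(tables)

    def move_servers(self, servers, source, target):
        src = self.groups[source]
        # Mirrors: "Cannot leave a RSGroup <source> that contains tables
        # without servers to host them."
        if src["tables"] and src["servers"] <= set(servers):
            raise ConstraintException(
                f"Cannot leave a RSGroup {source} that contains tables "
                "without servers to host them.")
        src["servers"] -= set(servers)
        self.groups[target]["servers"] |= set(servers)

    def remove_group(self, name):
        g = self.groups[name]
        # Mirrors: "RSGroup <name> has N tables/servers; you must remove
        # these ... before the rsgroup can be removed."
        if g["tables"]:
            raise ConstraintException(f"RSGroup {name} has {len(g['tables'])} tables")
        if g["servers"]:
            raise ConstraintException(f"RSGroup {name} has {len(g['servers'])} servers")
        del self.groups[name]
```

Under this model, replaying the log's sequence (move the table out of `bar`, then its three servers, then remove `bar`) succeeds, while either failed shortcut raises, matching the two ConstraintExceptions recorded above.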
request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:33,921 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-23 22:10:33,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-23 22:10:33,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:33,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-23 22:10:33,930 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150233930"}]},"ts":"1690150233930"} 2023-07-23 22:10:33,932 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-23 22:10:33,935 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-23 22:10:33,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, UNASSIGN}] 2023-07-23 22:10:33,937 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, UNASSIGN 2023-07-23 22:10:33,938 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:33,938 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150233938"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150233938"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150233938"}]},"ts":"1690150233938"} 2023-07-23 22:10:33,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:34,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-23 22:10:34,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:34,092 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df611f47dda86121f8b40a9bedd25c0d, disabling compactions & flushes 2023-07-23 22:10:34,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:34,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:34,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 
after waiting 0 ms 2023-07-23 22:10:34,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:34,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 22:10:34,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d. 2023-07-23 22:10:34,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df611f47dda86121f8b40a9bedd25c0d: 2023-07-23 22:10:34,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:34,100 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=df611f47dda86121f8b40a9bedd25c0d, regionState=CLOSED 2023-07-23 22:10:34,101 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690150234100"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150234100"}]},"ts":"1690150234100"} 2023-07-23 22:10:34,104 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-23 22:10:34,104 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; CloseRegionProcedure df611f47dda86121f8b40a9bedd25c0d, server=jenkins-hbase4.apache.org,46085,1690150220016 in 163 msec 2023-07-23 22:10:34,105 INFO 
[PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-23 22:10:34,105 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=df611f47dda86121f8b40a9bedd25c0d, UNASSIGN in 169 msec 2023-07-23 22:10:34,106 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150234106"}]},"ts":"1690150234106"} 2023-07-23 22:10:34,107 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-23 22:10:34,109 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-23 22:10:34,111 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 188 msec 2023-07-23 22:10:34,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-23 22:10:34,229 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-23 22:10:34,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-23 22:10:34,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:34,235 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=87, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure 
table=Group_testFailRemoveGroup 2023-07-23 22:10:34,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-23 22:10:34,236 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=87, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:34,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:34,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:34,241 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:34,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-23 22:10:34,249 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 22:10:34,252 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits] 2023-07-23 
22:10:34,266 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/10.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d/recovered.edits/10.seqid 2023-07-23 22:10:34,267 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testFailRemoveGroup/df611f47dda86121f8b40a9bedd25c0d 2023-07-23 22:10:34,270 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 22:10:34,273 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=87, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:34,284 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-23 22:10:34,287 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-23 22:10:34,289 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=87, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:34,289 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-23 22:10:34,289 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150234289"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:34,293 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 22:10:34,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => df611f47dda86121f8b40a9bedd25c0d, NAME => 'Group_testFailRemoveGroup,,1690150231231.df611f47dda86121f8b40a9bedd25c0d.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 22:10:34,293 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-23 22:10:34,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150234293"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:34,311 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-23 22:10:34,315 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=87, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 22:10:34,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 84 msec 2023-07-23 22:10:34,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-23 22:10:34,346 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-23 22:10:34,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:34,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 22:10:34,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:34,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:34,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:34,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:34,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:34,372 DEBUG [HBase-Metrics2-1] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 22:10:34,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:34,377 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:34,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:34,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:34,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:34,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:34,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers 
[jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:34,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:34,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151434396, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:34,397 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at 
org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:34,399 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,400 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:34,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:34,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:34,425 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=499 (was 496) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632019018_17 at /127.0.0.1:56722 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_295696383_17 at /127.0.0.1:56684 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1911868242_17 at /127.0.0.1:57996 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1911868242_17 at /127.0.0.1:57994 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3ed81975-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 772) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=501 (was 468) - SystemLoadAverage LEAK? 
-, ProcessCount=176 (was 176), AvailableMemoryMB=6063 (was 6264) 2023-07-23 22:10:34,447 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=499, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=501, ProcessCount=176, AvailableMemoryMB=6062 2023-07-23 22:10:34,448 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-23 22:10:34,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:34,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:34,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:34,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:34,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:34,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:34,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:34,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:34,466 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:34,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:34,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,470 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:34,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:34,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:34,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:34,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:34,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151434479, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:34,480 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:34,485 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:34,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,486 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:34,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:34,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:34,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:34,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:34,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:34,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:34,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:34,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191] to rsgroup Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,505 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:34,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:34,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 22:10:34,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233] are moved back to default 2023-07-23 22:10:34,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:34,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:34,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:34,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,514 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:34,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:34,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:34,520 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:34,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 88 2023-07-23 22:10:34,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 22:10:34,523 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:34,524 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:34,524 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 
22:10:34,525 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:34,528 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:34,530 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:34,531 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da empty. 2023-07-23 22:10:34,531 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:34,531 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 22:10:34,549 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:34,551 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 039587bb8a11cfb43412b6bc2cada9da, NAME => 'GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL 
=> 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:34,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:34,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 039587bb8a11cfb43412b6bc2cada9da, disabling compactions & flushes 2023-07-23 22:10:34,580 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:34,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:34,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. after waiting 0 ms 2023-07-23 22:10:34,580 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:34,582 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 
2023-07-23 22:10:34,582 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 039587bb8a11cfb43412b6bc2cada9da:
2023-07-23 22:10:34,585 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META
2023-07-23 22:10:34,587 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150234587"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150234587"}]},"ts":"1690150234587"}
2023-07-23 22:10:34,588 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-23 22:10:34,594 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 22:10:34,595 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150234595"}]},"ts":"1690150234595"}
2023-07-23 22:10:34,596 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta
2023-07-23 22:10:34,600 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:34,600 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:34,600 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:34,600 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 22:10:34,600 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:34,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, ASSIGN}]
2023-07-23 22:10:34,602 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, ASSIGN
2023-07-23 22:10:34,603 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false
2023-07-23 22:10:34,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88
2023-07-23 22:10:34,753 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 22:10:34,755 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:34,755 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150234754"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150234754"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150234754"}]},"ts":"1690150234754"}
2023-07-23 22:10:34,757 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,41457,1690150220404}]
2023-07-23 22:10:34,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88
2023-07-23 22:10:34,912 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.
2023-07-23 22:10:34,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 039587bb8a11cfb43412b6bc2cada9da, NAME => 'GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:34,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:34,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,914 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,916 DEBUG [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/f
2023-07-23 22:10:34,916 DEBUG [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/f
2023-07-23 22:10:34,916 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 039587bb8a11cfb43412b6bc2cada9da columnFamilyName f
2023-07-23 22:10:34,917 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] regionserver.HStore(310): Store=039587bb8a11cfb43412b6bc2cada9da/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:34,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 039587bb8a11cfb43412b6bc2cada9da
2023-07-23 22:10:34,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:34,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 039587bb8a11cfb43412b6bc2cada9da; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9619928000, jitterRate=-0.1040743887424469}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:34,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 039587bb8a11cfb43412b6bc2cada9da:
2023-07-23 22:10:34,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da., pid=90, masterSystemTime=1690150234908
2023-07-23 22:10:34,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.
2023-07-23 22:10:34,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.
2023-07-23 22:10:34,926 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:34,926 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150234926"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150234926"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150234926"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150234926"}]},"ts":"1690150234926"}
2023-07-23 22:10:34,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89
2023-07-23 22:10:34,931 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,41457,1690150220404 in 172 msec
2023-07-23 22:10:34,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88
2023-07-23 22:10:34,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, ASSIGN in 331 msec
2023-07-23 22:10:34,936 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:34,936 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150234936"}]},"ts":"1690150234936"}
2023-07-23 22:10:34,939 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta
2023-07-23 22:10:34,949 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:34,950 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 432 msec
2023-07-23 22:10:35,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88
2023-07-23 22:10:35,125 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 88 completed
2023-07-23 22:10:35,126 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms
2023-07-23 22:10:35,126 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:35,129 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states.
2023-07-23 22:10:35,130 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:35,130 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned.
2023-07-23 22:10:35,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:35,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB
2023-07-23 22:10:35,134 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION
2023-07-23 22:10:35,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 91
2023-07-23 22:10:35,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91
2023-07-23 22:10:35,137 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,137 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:35,138 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:35,138 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:35,142 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-23 22:10:35,144 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,145 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d empty.
2023-07-23 22:10:35,145 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,145 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions
2023-07-23 22:10:35,164 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:35,167 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6963958e217f9174c82f3863a00f116d, NAME => 'GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 6963958e217f9174c82f3863a00f116d, disabling compactions & flushes
2023-07-23 22:10:35,195 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. after waiting 0 ms
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,195 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,195 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 6963958e217f9174c82f3863a00f116d:
2023-07-23 22:10:35,205 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META
2023-07-23 22:10:35,206 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235206"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150235206"}]},"ts":"1690150235206"}
2023-07-23 22:10:35,207 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-23 22:10:35,209 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 22:10:35,209 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150235209"}]},"ts":"1690150235209"}
2023-07-23 22:10:35,211 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta
2023-07-23 22:10:35,215 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:35,215 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:35,215 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:35,215 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 22:10:35,215 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:35,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, ASSIGN}]
2023-07-23 22:10:35,217 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, ASSIGN
2023-07-23 22:10:35,218 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39885,1690150225039; forceNewPlan=false, retain=false
2023-07-23 22:10:35,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91
2023-07-23 22:10:35,369 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 22:10:35,370 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:35,370 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235370"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150235370"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150235370"}]},"ts":"1690150235370"}
2023-07-23 22:10:35,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,39885,1690150225039}]
2023-07-23 22:10:35,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91
2023-07-23 22:10:35,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6963958e217f9174c82f3863a00f116d, NAME => 'GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:35,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:35,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,535 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,537 DEBUG [StoreOpener-6963958e217f9174c82f3863a00f116d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/f
2023-07-23 22:10:35,537 DEBUG [StoreOpener-6963958e217f9174c82f3863a00f116d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/f
2023-07-23 22:10:35,537 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6963958e217f9174c82f3863a00f116d columnFamilyName f
2023-07-23 22:10:35,538 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] regionserver.HStore(310): Store=6963958e217f9174c82f3863a00f116d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:35,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6963958e217f9174c82f3863a00f116d
2023-07-23 22:10:35,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:35,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6963958e217f9174c82f3863a00f116d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11507090560, jitterRate=0.07168132066726685}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:35,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6963958e217f9174c82f3863a00f116d:
2023-07-23 22:10:35,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d., pid=93, masterSystemTime=1690150235524
2023-07-23 22:10:35,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,550 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.
2023-07-23 22:10:35,551 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:35,551 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235550"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150235550"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150235550"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150235550"}]},"ts":"1690150235550"}
2023-07-23 22:10:35,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92
2023-07-23 22:10:35,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,39885,1690150225039 in 180 msec
2023-07-23 22:10:35,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91
2023-07-23 22:10:35,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, ASSIGN in 340 msec
2023-07-23 22:10:35,558 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:35,558 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150235558"}]},"ts":"1690150235558"}
2023-07-23 22:10:35,560 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta
2023-07-23 22:10:35,562 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:35,564 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 430 msec
2023-07-23 22:10:35,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91
2023-07-23 22:10:35,740 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 91 completed
2023-07-23 22:10:35,740 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms
2023-07-23 22:10:35,740 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:35,744 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states.
2023-07-23 22:10:35,744 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:35,744 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned.
2023-07-23 22:10:35,745 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:35,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA
2023-07-23 22:10:35,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:35,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB
2023-07-23 22:10:35,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:35,766 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:35,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:35,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:35,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 6963958e217f9174c82f3863a00f116d to RSGroup Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, REOPEN/MOVE
2023-07-23 22:10:35,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,780 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, REOPEN/MOVE
2023-07-23 22:10:35,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 039587bb8a11cfb43412b6bc2cada9da to RSGroup Group_testMultiTableMove_1358604544
2023-07-23 22:10:35,784 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:35,784 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235784"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150235784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150235784"}]},"ts":"1690150235784"}
2023-07-23 22:10:35,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, REOPEN/MOVE
2023-07-23 22:10:35,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1358604544, current retry=0
2023-07-23 22:10:35,786 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, REOPEN/MOVE
2023-07-23 22:10:35,788 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:35,788 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235788"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150235788"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150235788"}]},"ts":"1690150235788"}
2023-07-23 22:10:35,791 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=94, state=RUNNABLE; CloseRegionProcedure
6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,39885,1690150225039}] 2023-07-23 22:10:35,793 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=95, state=RUNNABLE; CloseRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:35,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:35,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6963958e217f9174c82f3863a00f116d, disabling compactions & flushes 2023-07-23 22:10:35,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:35,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:35,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. after waiting 0 ms 2023-07-23 22:10:35,950 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 
2023-07-23 22:10:35,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:35,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 039587bb8a11cfb43412b6bc2cada9da, disabling compactions & flushes 2023-07-23 22:10:35,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:35,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:35,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. after waiting 0 ms 2023-07-23 22:10:35,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 
2023-07-23 22:10:35,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:35,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:35,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:35,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 
2023-07-23 22:10:35,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 039587bb8a11cfb43412b6bc2cada9da: 2023-07-23 22:10:35,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6963958e217f9174c82f3863a00f116d: 2023-07-23 22:10:35,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 039587bb8a11cfb43412b6bc2cada9da move to jenkins-hbase4.apache.org,34191,1690150220233 record at close sequenceid=2 2023-07-23 22:10:35,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6963958e217f9174c82f3863a00f116d move to jenkins-hbase4.apache.org,34191,1690150220233 record at close sequenceid=2 2023-07-23 22:10:35,963 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:35,964 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=CLOSED 2023-07-23 22:10:35,964 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235964"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150235964"}]},"ts":"1690150235964"} 2023-07-23 22:10:35,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:35,965 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=CLOSED 2023-07-23 22:10:35,965 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150235965"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150235965"}]},"ts":"1690150235965"} 2023-07-23 22:10:35,974 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=94 2023-07-23 22:10:35,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=95 2023-07-23 22:10:35,974 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=94, state=SUCCESS; CloseRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,39885,1690150225039 in 179 msec 2023-07-23 22:10:35,974 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=95, state=SUCCESS; CloseRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,41457,1690150220404 in 175 msec 2023-07-23 22:10:35,975 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=95, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:35,975 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false 2023-07-23 22:10:36,125 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:36,125 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:36,126 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150236125"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150236125"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150236125"}]},"ts":"1690150236125"} 2023-07-23 22:10:36,126 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150236125"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150236125"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150236125"}]},"ts":"1690150236125"} 2023-07-23 22:10:36,127 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=95, state=RUNNABLE; OpenRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:36,128 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=94, state=RUNNABLE; OpenRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:36,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 
2023-07-23 22:10:36,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6963958e217f9174c82f3863a00f116d, NAME => 'GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:36,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:36,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,286 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,287 DEBUG [StoreOpener-6963958e217f9174c82f3863a00f116d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/f 2023-07-23 22:10:36,287 DEBUG [StoreOpener-6963958e217f9174c82f3863a00f116d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/f 2023-07-23 22:10:36,288 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6963958e217f9174c82f3863a00f116d columnFamilyName f 2023-07-23 22:10:36,288 INFO [StoreOpener-6963958e217f9174c82f3863a00f116d-1] regionserver.HStore(310): Store=6963958e217f9174c82f3863a00f116d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:36,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:36,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6963958e217f9174c82f3863a00f116d 
2023-07-23 22:10:36,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6963958e217f9174c82f3863a00f116d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10078124800, jitterRate=-0.06140148639678955}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:36,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6963958e217f9174c82f3863a00f116d: 2023-07-23 22:10:36,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d., pid=99, masterSystemTime=1690150236279 2023-07-23 22:10:36,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:36,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:36,298 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 
2023-07-23 22:10:36,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 039587bb8a11cfb43412b6bc2cada9da, NAME => 'GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:36,298 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:36,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,299 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150236298"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150236298"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150236298"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150236298"}]},"ts":"1690150236298"} 2023-07-23 22:10:36,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:36,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=99, resume processing ppid=94 2023-07-23 22:10:36,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=94, state=SUCCESS; OpenRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,34191,1690150220233 in 172 msec 2023-07-23 22:10:36,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, REOPEN/MOVE in 524 msec 2023-07-23 22:10:36,307 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,308 DEBUG [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/f 2023-07-23 22:10:36,308 DEBUG [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/f 2023-07-23 22:10:36,308 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 039587bb8a11cfb43412b6bc2cada9da columnFamilyName f 2023-07-23 22:10:36,309 INFO [StoreOpener-039587bb8a11cfb43412b6bc2cada9da-1] regionserver.HStore(310): Store=039587bb8a11cfb43412b6bc2cada9da/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:36,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,315 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 039587bb8a11cfb43412b6bc2cada9da; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9409482880, jitterRate=-0.12367361783981323}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:36,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 039587bb8a11cfb43412b6bc2cada9da: 2023-07-23 22:10:36,316 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post 
open deploy tasks for GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da., pid=98, masterSystemTime=1690150236279 2023-07-23 22:10:36,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:36,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:36,318 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:36,318 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150236318"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150236318"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150236318"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150236318"}]},"ts":"1690150236318"} 2023-07-23 22:10:36,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=95 2023-07-23 22:10:36,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=95, state=SUCCESS; OpenRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,34191,1690150220233 in 193 msec 2023-07-23 22:10:36,323 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, REOPEN/MOVE in 541 msec 2023-07-23 22:10:36,786 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=94 2023-07-23 22:10:36,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1358604544. 2023-07-23 22:10:36,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:36,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:36,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:36,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-23 22:10:36,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:36,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-23 22:10:36,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:36,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:36,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:36,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1358604544 2023-07-23 22:10:36,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:36,810 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-23 22:10:36,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-23 22:10:36,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:36,817 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150236817"}]},"ts":"1690150236817"} 2023-07-23 22:10:36,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 22:10:36,819 INFO [PEWorker-5] 
hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-23 22:10:36,821 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-23 22:10:36,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, UNASSIGN}] 2023-07-23 22:10:36,825 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, UNASSIGN 2023-07-23 22:10:36,826 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:36,826 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150236826"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150236826"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150236826"}]},"ts":"1690150236826"} 2023-07-23 22:10:36,836 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; CloseRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:36,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 22:10:36,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:36,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 039587bb8a11cfb43412b6bc2cada9da, disabling compactions & flushes 2023-07-23 22:10:36,991 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:36,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:36,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. after waiting 0 ms 2023-07-23 22:10:36,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 2023-07-23 22:10:36,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:36,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da. 
2023-07-23 22:10:36,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 039587bb8a11cfb43412b6bc2cada9da: 2023-07-23 22:10:37,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:37,002 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=039587bb8a11cfb43412b6bc2cada9da, regionState=CLOSED 2023-07-23 22:10:37,002 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150237002"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150237002"}]},"ts":"1690150237002"} 2023-07-23 22:10:37,011 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-23 22:10:37,011 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; CloseRegionProcedure 039587bb8a11cfb43412b6bc2cada9da, server=jenkins-hbase4.apache.org,34191,1690150220233 in 168 msec 2023-07-23 22:10:37,015 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-23 22:10:37,015 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=039587bb8a11cfb43412b6bc2cada9da, UNASSIGN in 189 msec 2023-07-23 22:10:37,016 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150237016"}]},"ts":"1690150237016"} 2023-07-23 22:10:37,019 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 
2023-07-23 22:10:37,022 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-23 22:10:37,024 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 212 msec 2023-07-23 22:10:37,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 22:10:37,122 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-23 22:10:37,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-23 22:10:37,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,126 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=103, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1358604544' 2023-07-23 22:10:37,128 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=103, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:37,131 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:37,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-23 22:10:37,136 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:37,141 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits] 2023-07-23 22:10:37,152 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da/recovered.edits/7.seqid 2023-07-23 22:10:37,153 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveA/039587bb8a11cfb43412b6bc2cada9da 2023-07-23 22:10:37,153 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 22:10:37,157 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=103, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,160 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-23 22:10:37,162 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-23 22:10:37,164 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=103, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,165 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-23 22:10:37,165 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150237165"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:37,167 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 22:10:37,167 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 039587bb8a11cfb43412b6bc2cada9da, NAME => 'GrouptestMultiTableMoveA,,1690150234516.039587bb8a11cfb43412b6bc2cada9da.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 22:10:37,167 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 
2023-07-23 22:10:37,167 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150237167"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:37,169 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-23 22:10:37,172 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=103, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 22:10:37,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 49 msec 2023-07-23 22:10:37,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-23 22:10:37,237 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-23 22:10:37,237 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-23 22:10:37,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-23 22:10:37,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,245 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150237245"}]},"ts":"1690150237245"} 2023-07-23 22:10:37,248 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated 
tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-23 22:10:37,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 22:10:37,251 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-23 22:10:37,253 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, UNASSIGN}] 2023-07-23 22:10:37,256 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=105, ppid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, UNASSIGN 2023-07-23 22:10:37,257 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:37,257 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150237257"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150237257"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150237257"}]},"ts":"1690150237257"} 2023-07-23 22:10:37,259 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE; CloseRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:37,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 
2023-07-23 22:10:37,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:37,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6963958e217f9174c82f3863a00f116d, disabling compactions & flushes 2023-07-23 22:10:37,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:37,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:37,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. after waiting 0 ms 2023-07-23 22:10:37,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 2023-07-23 22:10:37,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:37,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d. 
2023-07-23 22:10:37,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6963958e217f9174c82f3863a00f116d: 2023-07-23 22:10:37,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:37,420 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=105 updating hbase:meta row=6963958e217f9174c82f3863a00f116d, regionState=CLOSED 2023-07-23 22:10:37,420 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690150237419"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150237419"}]},"ts":"1690150237419"} 2023-07-23 22:10:37,423 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-23 22:10:37,423 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; CloseRegionProcedure 6963958e217f9174c82f3863a00f116d, server=jenkins-hbase4.apache.org,34191,1690150220233 in 162 msec 2023-07-23 22:10:37,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-23 22:10:37,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=6963958e217f9174c82f3863a00f116d, UNASSIGN in 170 msec 2023-07-23 22:10:37,425 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150237425"}]},"ts":"1690150237425"} 2023-07-23 22:10:37,426 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 
2023-07-23 22:10:37,428 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-23 22:10:37,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 191 msec 2023-07-23 22:10:37,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 22:10:37,551 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-23 22:10:37,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-23 22:10:37,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,556 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=107, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1358604544' 2023-07-23 22:10:37,557 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=107, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:37,560 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:37,562 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:37,564 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits] 2023-07-23 22:10:37,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 22:10:37,574 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits/7.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d/recovered.edits/7.seqid 2023-07-23 22:10:37,582 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/GrouptestMultiTableMoveB/6963958e217f9174c82f3863a00f116d 2023-07-23 22:10:37,582 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-23 22:10:37,595 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=107, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,597 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-23 22:10:37,600 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-23 22:10:37,602 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=107, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,602 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-23 22:10:37,602 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150237602"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:37,604 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 22:10:37,604 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6963958e217f9174c82f3863a00f116d, NAME => 'GrouptestMultiTableMoveB,,1690150235131.6963958e217f9174c82f3863a00f116d.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 22:10:37,604 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 
2023-07-23 22:10:37,604 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150237604"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:37,605 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-23 22:10:37,608 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=107, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 22:10:37,610 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 56 msec 2023-07-23 22:10:37,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 22:10:37,671 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-23 22:10:37,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:37,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:37,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:37,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:37,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:37,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191] to rsgroup default 2023-07-23 22:10:37,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1358604544 2023-07-23 22:10:37,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:37,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1358604544, current retry=0 2023-07-23 22:10:37,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233] are moved back to Group_testMultiTableMove_1358604544 2023-07-23 22:10:37,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1358604544 => default 2023-07-23 22:10:37,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.MoveServers 2023-07-23 22:10:37,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1358604544 2023-07-23 22:10:37,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:37,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:37,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:37,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:37,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:37,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:37,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:37,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:37,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:37,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:37,736 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:37,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:37,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,741 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:37,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:37,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:37,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:37,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:37,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:37,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 507 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151437753, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:37,754 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at 
org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:37,756 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:37,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:37,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:37,758 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:37,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:37,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:37,786 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500 (was 499) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3ed81975-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1648874798_17 at /127.0.0.1:56722 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1648874798_17 at /127.0.0.1:58068 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3ed81975-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_295696383_17 at /127.0.0.1:56684 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=764 (was 774), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 501), ProcessCount=176 (was 176), AvailableMemoryMB=5986 (was 6062) 2023-07-23 22:10:37,805 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=500, OpenFileDescriptor=764, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=176, AvailableMemoryMB=5984 2023-07-23 22:10:37,805 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-23 22:10:37,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:37,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:37,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:37,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:37,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:37,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:37,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:37,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:37,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:37,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:37,827 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:37,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:37,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:37,831 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:37,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:37,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:38,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:38,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:38,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 535 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151438092, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:38,093 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:38,095 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:38,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,096 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:38,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:38,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:38,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-23 22:10:38,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:38,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup oldGroup 2023-07-23 22:10:38,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 22:10:38,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to default 2023-07-23 22:10:38,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-23 22:10:38,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:38,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 22:10:38,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 22:10:38,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:38,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-23 22:10:38,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 22:10:38,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 22:10:38,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:38,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457] to rsgroup anotherRSGroup 2023-07-23 22:10:38,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 22:10:38,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 22:10:38,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 
22:10:38,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41457,1690150220404] are moved back to default 2023-07-23 22:10:38,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-23 22:10:38,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:38,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 22:10:38,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 22:10:38,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 
22:10:38,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-23 22:10:38,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:38,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:53220 deadline: 1690151438186, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-23 22:10:38,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 
2023-07-23 22:10:38,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:38,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:53220 deadline: 1690151438189, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-23 22:10:38,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-23 22:10:38,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:38,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:53220 deadline: 1690151438190, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-23 22:10:38,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-23 22:10:38,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:38,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:53220 deadline: 1690151438192, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-23 22:10:38,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:38,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:38,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:38,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457] to rsgroup default 2023-07-23 22:10:38,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 22:10:38,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 22:10:38,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-23 22:10:38,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41457,1690150220404] are moved back to anotherRSGroup 2023-07-23 22:10:38,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-23 22:10:38,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:38,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-23 22:10:38,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 22:10:38,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:38,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:38,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:38,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:38,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default 2023-07-23 22:10:38,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 22:10:38,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-23 22:10:38,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to oldGroup 2023-07-23 22:10:38,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-23 22:10:38,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 
2023-07-23 22:10:38,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-23 22:10:38,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:38,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:38,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:38,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:38,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:38,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 22:10:38,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:38,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 22:10:38,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:38,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 22:10:38,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:38,239 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 22:10:38,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 22:10:38,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:38,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:38,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 22:10:38,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:38,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master
2023-07-23 22:10:38,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:38,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 611 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151438251, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
2023-07-23 22:10:38,252 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
2023-07-23 22:10:38,253 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:38,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,254 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:38,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:38,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:38,276 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 500)
Potentially hanging thread: hconnection-0x30296f6d-shared-pool-16
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
	java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
	java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x30296f6d-shared-pool-15
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
	java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
	java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x30296f6d-shared-pool-17
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
	java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
	java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x30296f6d-shared-pool-18
	sun.misc.Unsafe.park(Native Method)
	java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
	java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
	java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
	java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
	java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
	java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=764 (was 764), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=478 (was 478), ProcessCount=176 (was 176), AvailableMemoryMB=5892 (was 5984)
2023-07-23 22:10:38,276 WARN [Listener at localhost/42675] hbase.ResourceChecker(130): Thread=503 is superior to 500
2023-07-23 22:10:38,294 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=503, OpenFileDescriptor=764, MaxFileDescriptor=60000, SystemLoadAverage=478, ProcessCount=176, AvailableMemoryMB=5891
2023-07-23 22:10:38,294 WARN [Listener at localhost/42675] hbase.ResourceChecker(130): Thread=503 is superior to 500
2023-07-23 22:10:38,294 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testRenameRSGroup
2023-07-23 22:10:38,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 22:10:38,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 22:10:38,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:38,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 22:10:38,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:38,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 22:10:38,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:38,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 22:10:38,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:38,309 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 22:10:38,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 22:10:38,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:38,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:38,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 22:10:38,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:38,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master
2023-07-23 22:10:38,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:38,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 639 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151438318, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
2023-07-23 22:10:38,319 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
2023-07-23 22:10:38,320 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:38,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,321 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:38,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:38,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:38,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:38,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:38,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup
2023-07-23 22:10:38,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:38,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:38,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:38,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:38,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:38,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:38,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:38,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup oldgroup
2023-07-23 22:10:38,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:38,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662):
Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 22:10:38,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to default 2023-07-23 22:10:38,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-23 22:10:38,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:38,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:38,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:38,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-23 22:10:38,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:38,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:38,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-23 22:10:38,351 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:38,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 108 2023-07-23 22:10:38,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 22:10:38,353 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 22:10:38,353 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,353 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,353 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,355 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, 
state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:38,357 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,358 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/testRename/f992c8675ecb32048f91b79a022c662a empty. 2023-07-23 22:10:38,358 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,358 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-23 22:10:38,381 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:38,383 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => f992c8675ecb32048f91b79a022c662a, NAME => 'testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp 2023-07-23 22:10:38,394 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): 
Instantiated testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:38,395 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing f992c8675ecb32048f91b79a022c662a, disabling compactions & flushes 2023-07-23 22:10:38,395 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:38,395 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:38,395 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. after waiting 0 ms 2023-07-23 22:10:38,395 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:38,395 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:38,395 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for f992c8675ecb32048f91b79a022c662a: 2023-07-23 22:10:38,397 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:38,398 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150238398"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150238398"}]},"ts":"1690150238398"} 2023-07-23 22:10:38,400 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 22:10:38,400 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:38,401 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150238401"}]},"ts":"1690150238401"} 2023-07-23 22:10:38,402 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-23 22:10:38,406 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:38,406 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:38,406 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:38,406 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:38,409 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, ASSIGN}] 2023-07-23 22:10:38,410 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, ASSIGN 2023-07-23 22:10:38,411 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=109, ppid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:38,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 22:10:38,561 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 22:10:38,563 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:38,563 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150238563"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150238563"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150238563"}]},"ts":"1690150238563"} 2023-07-23 22:10:38,564 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:38,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 22:10:38,720 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:38,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f992c8675ecb32048f91b79a022c662a, NAME => 'testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:38,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:38,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,721 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,723 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr 2023-07-23 22:10:38,723 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr 2023-07-23 22:10:38,723 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f992c8675ecb32048f91b79a022c662a columnFamilyName tr 2023-07-23 22:10:38,724 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(310): Store=f992c8675ecb32048f91b79a022c662a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:38,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:38,730 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:38,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f992c8675ecb32048f91b79a022c662a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9740220960, jitterRate=-0.09287123382091522}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:38,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f992c8675ecb32048f91b79a022c662a: 2023-07-23 22:10:38,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a., pid=110, masterSystemTime=1690150238716 2023-07-23 22:10:38,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:38,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:38,733 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=109 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:38,734 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150238733"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150238733"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150238733"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150238733"}]},"ts":"1690150238733"} 2023-07-23 22:10:38,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-23 22:10:38,737 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404 in 171 msec 2023-07-23 22:10:38,738 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-23 22:10:38,739 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, ASSIGN in 330 msec 2023-07-23 22:10:38,739 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:38,739 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150238739"}]},"ts":"1690150238739"} 2023-07-23 22:10:38,741 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-23 22:10:38,744 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=108, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:38,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; CreateTableProcedure table=testRename in 396 msec 2023-07-23 22:10:38,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=108 2023-07-23 22:10:38,955 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 108 completed 2023-07-23 22:10:38,956 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-23 22:10:38,956 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:38,959 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-23 22:10:38,960 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:38,960 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
2023-07-23 22:10:38,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-23 22:10:38,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 22:10:38,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:38,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:38,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:38,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-23 22:10:38,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region f992c8675ecb32048f91b79a022c662a to RSGroup oldgroup 2023-07-23 22:10:38,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:38,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:38,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:38,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:38,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:38,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE 2023-07-23 22:10:38,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-23 22:10:38,969 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE 2023-07-23 22:10:38,970 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:38,970 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150238969"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150238969"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150238969"}]},"ts":"1690150238969"} 2023-07-23 22:10:38,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:39,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:39,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
f992c8675ecb32048f91b79a022c662a, disabling compactions & flushes 2023-07-23 22:10:39,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:39,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:39,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. after waiting 0 ms 2023-07-23 22:10:39,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:39,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:39,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:39,130 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f992c8675ecb32048f91b79a022c662a:
2023-07-23 22:10:39,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f992c8675ecb32048f91b79a022c662a move to jenkins-hbase4.apache.org,34191,1690150220233 record at close sequenceid=2
2023-07-23 22:10:39,131 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,132 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=CLOSED
2023-07-23 22:10:39,132 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150239132"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150239132"}]},"ts":"1690150239132"}
2023-07-23 22:10:39,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111
2023-07-23 22:10:39,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404 in 162 msec
2023-07-23 22:10:39,135 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34191,1690150220233; forceNewPlan=false, retain=false
2023-07-23 22:10:39,285 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 22:10:39,286 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:39,286 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150239286"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150239286"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150239286"}]},"ts":"1690150239286"}
2023-07-23 22:10:39,288 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=111, state=RUNNABLE; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,34191,1690150220233}]
2023-07-23 22:10:39,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:39,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f992c8675ecb32048f91b79a022c662a, NAME => 'testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:39,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:39,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,446 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,447 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr
2023-07-23 22:10:39,447 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr
2023-07-23 22:10:39,447 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f992c8675ecb32048f91b79a022c662a columnFamilyName tr
2023-07-23 22:10:39,448 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(310): Store=f992c8675ecb32048f91b79a022c662a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:39,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:39,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f992c8675ecb32048f91b79a022c662a; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11011007840, jitterRate=0.025480017066001892}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:39,453 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f992c8675ecb32048f91b79a022c662a:
2023-07-23 22:10:39,454 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a., pid=113, masterSystemTime=1690150239439
2023-07-23 22:10:39,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:39,455 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:39,456 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:39,456 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150239456"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150239456"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150239456"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150239456"}]},"ts":"1690150239456"}
2023-07-23 22:10:39,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=111
2023-07-23 22:10:39,459 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=111, state=SUCCESS; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,34191,1690150220233 in 170 msec
2023-07-23 22:10:39,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE in 491 msec
2023-07-23 22:10:39,600 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-07-23 22:10:39,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=111
2023-07-23 22:10:39,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup.
2023-07-23 22:10:39,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:39,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:39,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:39,975 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:39,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename
2023-07-23 22:10:39,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:39,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup
2023-07-23 22:10:39,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:39,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename
2023-07-23 22:10:39,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:39,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:39,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:39,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal
2023-07-23 22:10:39,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:39,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal
2023-07-23 22:10:39,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:39,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:39,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8
2023-07-23 22:10:39,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:39,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:39,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:39,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457] to rsgroup normal
2023-07-23 22:10:39,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:39,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal
2023-07-23 22:10:39,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:39,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:39,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8
2023-07-23 22:10:39,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0
2023-07-23 22:10:39,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41457,1690150220404] are moved back to default
2023-07-23 22:10:39,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal
2023-07-23 22:10:39,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:39,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:39,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:40,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal
2023-07-23 22:10:40,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:40,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:40,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable
2023-07-23 22:10:40,006 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION
2023-07-23 22:10:40,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 114
2023-07-23 22:10:40,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114
2023-07-23 22:10:40,008 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:40,008 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal
2023-07-23 22:10:40,009 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:40,009 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:40,009 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8
2023-07-23 22:10:40,015 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-23 22:10:40,016 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,017 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179 empty.
2023-07-23 22:10:40,017 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,017 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions
2023-07-23 22:10:40,031 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:40,032 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4f43d3d673ff6f774d9a1575bc603179, NAME => 'unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 4f43d3d673ff6f774d9a1575bc603179, disabling compactions & flushes
2023-07-23 22:10:40,043 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. after waiting 0 ms
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,043 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,043 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 4f43d3d673ff6f774d9a1575bc603179:
2023-07-23 22:10:40,045 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META
2023-07-23 22:10:40,046 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240046"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150240046"}]},"ts":"1690150240046"}
2023-07-23 22:10:40,048 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-23 22:10:40,048 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 22:10:40,049 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150240049"}]},"ts":"1690150240049"}
2023-07-23 22:10:40,050 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta
2023-07-23 22:10:40,053 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, ASSIGN}]
2023-07-23 22:10:40,055 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, ASSIGN
2023-07-23 22:10:40,055 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false
2023-07-23 22:10:40,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114
2023-07-23 22:10:40,207 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:40,207 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240207"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150240207"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150240207"}]},"ts":"1690150240207"}
2023-07-23 22:10:40,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016}]
2023-07-23 22:10:40,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114
2023-07-23 22:10:40,364 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4f43d3d673ff6f774d9a1575bc603179, NAME => 'unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:40,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:40,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,366 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,367 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut
2023-07-23 22:10:40,368 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut
2023-07-23 22:10:40,368 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4f43d3d673ff6f774d9a1575bc603179 columnFamilyName ut
2023-07-23 22:10:40,369 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(310): Store=4f43d3d673ff6f774d9a1575bc603179/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:40,369 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,374 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename'
2023-07-23 22:10:40,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:40,375 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4f43d3d673ff6f774d9a1575bc603179; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9424872000, jitterRate=-0.12224039435386658}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:40,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4f43d3d673ff6f774d9a1575bc603179:
2023-07-23 22:10:40,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179., pid=116, masterSystemTime=1690150240360
2023-07-23 22:10:40,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,378 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,378 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:40,378 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240378"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150240378"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150240378"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150240378"}]},"ts":"1690150240378"}
2023-07-23 22:10:40,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115
2023-07-23 22:10:40,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016 in 171 msec
2023-07-23 22:10:40,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114
2023-07-23 22:10:40,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, ASSIGN in 328 msec
2023-07-23 22:10:40,383 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:40,383 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150240383"}]},"ts":"1690150240383"}
2023-07-23 22:10:40,384 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta
2023-07-23 22:10:40,386 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:40,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=unmovedTable in 384 msec
2023-07-23 22:10:40,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114
2023-07-23 22:10:40,610 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 114 completed
2023-07-23 22:10:40,610 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms
2023-07-23 22:10:40,610 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:40,614 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states.
2023-07-23 22:10:40,615 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:40,615 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned.
2023-07-23 22:10:40,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal
2023-07-23 22:10:40,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup
2023-07-23 22:10:40,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal
2023-07-23 22:10:40,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:40,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:40,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8
2023-07-23 22:10:40,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal
2023-07-23 22:10:40,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 4f43d3d673ff6f774d9a1575bc603179 to RSGroup normal
2023-07-23 22:10:40,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE
2023-07-23 22:10:40,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0
2023-07-23 22:10:40,635 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE
2023-07-23 22:10:40,636 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:40,636 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240636"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150240636"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150240636"}]},"ts":"1690150240636"}
2023-07-23 22:10:40,637 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016}]
2023-07-23 22:10:40,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:40,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4f43d3d673ff6f774d9a1575bc603179, disabling compactions & flushes
2023-07-23 22:10:40,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. after waiting 0 ms
2023-07-23 22:10:40,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,796 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:40,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:40,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4f43d3d673ff6f774d9a1575bc603179: 2023-07-23 22:10:40,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4f43d3d673ff6f774d9a1575bc603179 move to jenkins-hbase4.apache.org,41457,1690150220404 record at close sequenceid=2 2023-07-23 22:10:40,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:40,799 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=CLOSED 2023-07-23 22:10:40,799 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240799"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150240799"}]},"ts":"1690150240799"} 2023-07-23 22:10:40,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-23 22:10:40,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016 in 163 msec 2023-07-23 22:10:40,802 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:40,953 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:40,953 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150240953"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150240953"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150240953"}]},"ts":"1690150240953"} 2023-07-23 22:10:40,955 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:41,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4f43d3d673ff6f774d9a1575bc603179, NAME => 'unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:41,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:41,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading 
for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,113 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,114 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut 2023-07-23 22:10:41,114 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut 2023-07-23 22:10:41,114 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4f43d3d673ff6f774d9a1575bc603179 columnFamilyName ut 2023-07-23 22:10:41,115 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(310): Store=4f43d3d673ff6f774d9a1575bc603179/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:41,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4f43d3d673ff6f774d9a1575bc603179; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11364625120, jitterRate=0.05841319262981415}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:41,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4f43d3d673ff6f774d9a1575bc603179: 2023-07-23 22:10:41,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179., pid=119, masterSystemTime=1690150241106 2023-07-23 22:10:41,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 
2023-07-23 22:10:41,123 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:41,123 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150241123"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150241123"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150241123"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150241123"}]},"ts":"1690150241123"} 2023-07-23 22:10:41,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-23 22:10:41,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,41457,1690150220404 in 170 msec 2023-07-23 22:10:41,127 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE in 501 msec 2023-07-23 22:10:41,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-23 22:10:41,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
2023-07-23 22:10:41,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:41,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:41,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:41,643 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:41,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 22:10:41,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:41,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-23 22:10:41,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:41,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 22:10:41,647 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:41,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-23 22:10:41,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 22:10:41,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:41,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:41,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:41,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-23 22:10:41,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-23 22:10:41,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:41,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:41,663 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-23 22:10:41,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:41,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 22:10:41,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:41,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 22:10:41,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:41,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:41,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:41,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): 
Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-23 22:10:41,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 22:10:41,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:41,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:41,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:41,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 22:10:41,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-23 22:10:41,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region 4f43d3d673ff6f774d9a1575bc603179 to RSGroup default 2023-07-23 22:10:41,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE 2023-07-23 22:10:41,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 22:10:41,682 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE 2023-07-23 22:10:41,682 
INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:41,683 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150241682"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150241682"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150241682"}]},"ts":"1690150241682"} 2023-07-23 22:10:41,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:41,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4f43d3d673ff6f774d9a1575bc603179, disabling compactions & flushes 2023-07-23 22:10:41,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 
after waiting 0 ms 2023-07-23 22:10:41,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:41,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:41,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4f43d3d673ff6f774d9a1575bc603179: 2023-07-23 22:10:41,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4f43d3d673ff6f774d9a1575bc603179 move to jenkins-hbase4.apache.org,46085,1690150220016 record at close sequenceid=5 2023-07-23 22:10:41,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:41,845 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=CLOSED 2023-07-23 22:10:41,845 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150241845"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150241845"}]},"ts":"1690150241845"} 2023-07-23 22:10:41,848 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-23 22:10:41,848 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished 
pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,41457,1690150220404 in 163 msec 2023-07-23 22:10:41,849 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false 2023-07-23 22:10:41,999 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:41,999 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150241999"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150241999"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150241999"}]},"ts":"1690150241999"} 2023-07-23 22:10:42,001 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:42,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 
2023-07-23 22:10:42,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4f43d3d673ff6f774d9a1575bc603179, NAME => 'unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:42,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:42,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,159 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,160 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut 2023-07-23 22:10:42,160 DEBUG [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/ut 2023-07-23 22:10:42,160 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4f43d3d673ff6f774d9a1575bc603179 columnFamilyName ut 2023-07-23 22:10:42,161 INFO [StoreOpener-4f43d3d673ff6f774d9a1575bc603179-1] regionserver.HStore(310): Store=4f43d3d673ff6f774d9a1575bc603179/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:42,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4f43d3d673ff6f774d9a1575bc603179 2023-07-23 22:10:42,167 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4f43d3d673ff6f774d9a1575bc603179; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11558960640, jitterRate=0.07651209831237793}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:42,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4f43d3d673ff6f774d9a1575bc603179: 2023-07-23 22:10:42,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179., pid=122, masterSystemTime=1690150242153 2023-07-23 22:10:42,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 2023-07-23 22:10:42,169 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. 
2023-07-23 22:10:42,169 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4f43d3d673ff6f774d9a1575bc603179, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:42,170 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690150242169"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150242169"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150242169"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150242169"}]},"ts":"1690150242169"} 2023-07-23 22:10:42,172 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-23 22:10:42,172 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 4f43d3d673ff6f774d9a1575bc603179, server=jenkins-hbase4.apache.org,46085,1690150220016 in 170 msec 2023-07-23 22:10:42,173 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4f43d3d673ff6f774d9a1575bc603179, REOPEN/MOVE in 491 msec 2023-07-23 22:10:42,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-23 22:10:42,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-23 22:10:42,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:42,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41457] to rsgroup default 2023-07-23 22:10:42,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 22:10:42,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:42,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:42,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:42,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 22:10:42,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-23 22:10:42,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41457,1690150220404] are moved back to normal 2023-07-23 22:10:42,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-23 22:10:42,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:42,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-23 22:10:42,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:42,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:42,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:42,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 22:10:42,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:42,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:42,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:42,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:42,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:42,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:42,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:42,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:42,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:42,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 22:10:42,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:42,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-23 22:10:42,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-23 22:10:42,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:42,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:42,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-23 22:10:42,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(345): Moving region f992c8675ecb32048f91b79a022c662a to RSGroup default 2023-07-23 22:10:42,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE 2023-07-23 22:10:42,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 22:10:42,722 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE 2023-07-23 22:10:42,723 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34191,1690150220233 2023-07-23 22:10:42,723 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150242723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150242723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150242723"}]},"ts":"1690150242723"} 2023-07-23 22:10:42,724 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,34191,1690150220233}] 2023-07-23 22:10:42,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:42,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f992c8675ecb32048f91b79a022c662a, disabling compactions & flushes 2023-07-23 22:10:42,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:42,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:42,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. after waiting 0 ms 2023-07-23 22:10:42,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:42,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 22:10:42,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:42,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f992c8675ecb32048f91b79a022c662a: 2023-07-23 22:10:42,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f992c8675ecb32048f91b79a022c662a move to jenkins-hbase4.apache.org,41457,1690150220404 record at close sequenceid=5 2023-07-23 22:10:42,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:42,887 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=CLOSED 2023-07-23 22:10:42,887 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150242887"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150242887"}]},"ts":"1690150242887"} 2023-07-23 22:10:42,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-23 22:10:42,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,34191,1690150220233 in 164 msec 2023-07-23 22:10:42,890 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false 2023-07-23 22:10:43,041 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:43,041 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:43,041 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150243041"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150243041"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150243041"}]},"ts":"1690150243041"} 2023-07-23 22:10:43,043 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:43,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:43,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f992c8675ecb32048f91b79a022c662a, NAME => 'testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:43,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:43,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,200 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,201 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr 2023-07-23 22:10:43,201 DEBUG [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/tr 2023-07-23 22:10:43,202 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f992c8675ecb32048f91b79a022c662a columnFamilyName tr 2023-07-23 22:10:43,202 INFO [StoreOpener-f992c8675ecb32048f91b79a022c662a-1] regionserver.HStore(310): Store=f992c8675ecb32048f91b79a022c662a/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:43,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f992c8675ecb32048f91b79a022c662a 2023-07-23 22:10:43,207 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f992c8675ecb32048f91b79a022c662a; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11784100320, jitterRate=0.09747986495494843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:43,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f992c8675ecb32048f91b79a022c662a: 2023-07-23 22:10:43,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a., pid=125, masterSystemTime=1690150243194 2023-07-23 22:10:43,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 2023-07-23 22:10:43,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. 
2023-07-23 22:10:43,210 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=f992c8675ecb32048f91b79a022c662a, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:43,210 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690150243210"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150243210"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150243210"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150243210"}]},"ts":"1690150243210"} 2023-07-23 22:10:43,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-23 22:10:43,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure f992c8675ecb32048f91b79a022c662a, server=jenkins-hbase4.apache.org,41457,1690150220404 in 168 msec 2023-07-23 22:10:43,214 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f992c8675ecb32048f91b79a022c662a, REOPEN/MOVE in 492 msec 2023-07-23 22:10:43,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-23 22:10:43,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-23 22:10:43,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:43,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default 2023-07-23 22:10:43,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 22:10:43,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:43,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-23 22:10:43,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to newgroup 2023-07-23 22:10:43,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-23 22:10:43,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:43,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): 
Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-23 22:10:43,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:43,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:43,740 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:43,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:43,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:43,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:43,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:43,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:43,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 759 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151443756, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
2023-07-23 22:10:43,757 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) 
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:43,759 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:43,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,760 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:43,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:43,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,778 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=495 (was 503), OpenFileDescriptor=747 (was 764), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=448 (was 478), ProcessCount=176 (was 176), AvailableMemoryMB=5791 (was 5891) 2023-07-23 22:10:43,796 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=495, OpenFileDescriptor=747, MaxFileDescriptor=60000, 
SystemLoadAverage=448, ProcessCount=176, AvailableMemoryMB=5791 2023-07-23 22:10:43,796 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-23 22:10:43,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:43,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 22:10:43,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:43,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:43,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:43,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:43,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/default 2023-07-23 22:10:43,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:43,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:43,809 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:43,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:43,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:43,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:43,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:43,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:43,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 787 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151443820, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:43,820 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:43,822 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:43,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,823 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:43,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:43,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-23 22:10:43,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:43,830 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-23 22:10:43,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-23 22:10:43,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-23 22:10:43,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-23 22:10:43,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 799 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:53220 deadline: 1690151443831, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-23 22:10:43,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-23 22:10:43,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,834 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:53220 deadline: 1690151443833, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 22:10:43,836 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-23 22:10:43,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-23 22:10:43,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-23 22:10:43,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:53220 deadline: 1690151443841, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 22:10:43,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:43,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:43,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:43,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:43,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:43,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:43,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:43,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:43,856 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:43,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:43,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,859 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:43,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:43,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:43,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:43,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 830 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151443866, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:43,869 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:43,871 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:43,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,872 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:43,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:43,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,888 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=499 (was 495) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x126adeaf-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=747 (was 747), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=448 (was 448), ProcessCount=176 (was 176), AvailableMemoryMB=5791 (was 5791) 2023-07-23 22:10:43,903 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=499, OpenFileDescriptor=747, MaxFileDescriptor=60000, SystemLoadAverage=448, ProcessCount=176, AvailableMemoryMB=5791 2023-07-23 22:10:43,903 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-23 22:10:43,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:43,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:43,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:43,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:43,916 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:43,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:43,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,919 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:43,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:43,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:43,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:43,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:43,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 858 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151443926, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:43,927 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:43,928 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:43,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,929 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:43,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:43,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:43,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:43,931 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1498307518 2023-07-23 22:10:43,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:43,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498307518 2023-07-23 22:10:43,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:43,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 22:10:43,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:43,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:43,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:43,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup Group_testDisabledTableMove_1498307518 2023-07-23 22:10:43,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default
2023-07-23 22:10:43,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498307518
2023-07-23 22:10:43,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:43,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0
2023-07-23 22:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to default
2023-07-23 22:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1498307518
2023-07-23 22:10:43,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:43,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:43,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:43,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165):
Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1498307518
2023-07-23 22:10:43,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:43,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:43,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:43,959 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION
2023-07-23 22:10:43,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 126
2023-07-23 22:10:43,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126
2023-07-23 22:10:43,961 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:43,961 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode:
/hbase/rsgroup/Group_testDisabledTableMove_1498307518
2023-07-23 22:10:43,962 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:43,962 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:43,964 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-23 22:10:43,967 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:43,967 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4 empty.
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7 empty.
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8 empty.
2023-07-23 22:10:43,968 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929 empty.
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25 empty.
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:43,969 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:43,969 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions
2023-07-23 22:10:43,985 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:43,987 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7e56e4043d461d05a3247e0e750a1ec4, NAME => 'Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.', STARTKEY => '', ENDKEY => 'aaaaa'},
tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:43,992 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => fbe85498bd6cfea3f60711efb41a9da8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:43,993 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1b39f128dc522d59fd1f5c1cd5aecf25, NAME => 'Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'},
regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:44,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126
2023-07-23 22:10:44,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:44,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 1b39f128dc522d59fd1f5c1cd5aecf25, disabling compactions & flushes
2023-07-23 22:10:44,069 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25. after waiting 0 ms
2023-07-23 22:10:44,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,069 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 1b39f128dc522d59fd1f5c1cd5aecf25:
2023-07-23 22:10:44,070 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => be0bdb29001dbe7412a1304704b39ad7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing fbe85498bd6cfea3f60711efb41a9da8, disabling compactions & flushes
2023-07-23 22:10:44,070 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8. after waiting 0 ms
2023-07-23 22:10:44,070 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,070 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,071 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for fbe85498bd6cfea3f60711efb41a9da8:
2023-07-23 22:10:44,071 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => e65ad33f4e5fdd596a9f7281b5206929, NAME => 'Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp
2023-07-23 22:10:44,071 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:44,071 DEBUG
[RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 7e56e4043d461d05a3247e0e750a1ec4, disabling compactions & flushes
2023-07-23 22:10:44,071 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,071 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,071 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4. after waiting 0 ms
2023-07-23 22:10:44,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,072 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 7e56e4043d461d05a3247e0e750a1ec4:
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing be0bdb29001dbe7412a1304704b39ad7, disabling compactions & flushes
2023-07-23 22:10:44,093 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7. after waiting 0 ms
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,093 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,093 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for be0bdb29001dbe7412a1304704b39ad7:
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing e65ad33f4e5fdd596a9f7281b5206929, disabling compactions & flushes
2023-07-23 22:10:44,094 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929. after waiting 0 ms
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,094 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,094 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for e65ad33f4e5fdd596a9f7281b5206929:
2023-07-23 22:10:44,097 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META
2023-07-23 22:10:44,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244097"}]},"ts":"1690150244097"}
2023-07-23 22:10:44,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244097"}]},"ts":"1690150244097"}
2023-07-23 22:10:44,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244097"}]},"ts":"1690150244097"}
2023-07-23 22:10:44,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244097"}]},"ts":"1690150244097"}
2023-07-23 22:10:44,098 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put
{"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244097"}]},"ts":"1690150244097"}
2023-07-23 22:10:44,100 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta.
2023-07-23 22:10:44,101 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 22:10:44,101 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150244101"}]},"ts":"1690150244101"}
2023-07-23 22:10:44,102 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta
2023-07-23 22:10:44,106 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:44,106 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:44,106 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:44,106 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:44,106 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, ASSIGN}, {pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove,
region=1b39f128dc522d59fd1f5c1cd5aecf25, ASSIGN}, {pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, ASSIGN}, {pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, ASSIGN}, {pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, ASSIGN}]
2023-07-23 22:10:44,108 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, ASSIGN
2023-07-23 22:10:44,108 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, ASSIGN
2023-07-23 22:10:44,108 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, ASSIGN
2023-07-23 22:10:44,108 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, ASSIGN
2023-07-23 22:10:44,109 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure
table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, ASSIGN
2023-07-23 22:10:44,109 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false
2023-07-23 22:10:44,109 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false
2023-07-23 22:10:44,109 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false
2023-07-23 22:10:44,109 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41457,1690150220404; forceNewPlan=false, retain=false
2023-07-23 22:10:44,110 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8,
ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46085,1690150220016; forceNewPlan=false, retain=false
2023-07-23 22:10:44,259 INFO [jenkins-hbase4:37045] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment.
2023-07-23 22:10:44,264 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=1b39f128dc522d59fd1f5c1cd5aecf25, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,264 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=7e56e4043d461d05a3247e0e750a1ec4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,264 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244263"}]},"ts":"1690150244263"}
2023-07-23 22:10:44,264 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e65ad33f4e5fdd596a9f7281b5206929, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,264 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=fbe85498bd6cfea3f60711efb41a9da8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:44,264 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244263"}]},"ts":"1690150244263"}
2023-07-23 22:10:44,264 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244263"}]},"ts":"1690150244263"}
2023-07-23 22:10:44,264 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=be0bdb29001dbe7412a1304704b39ad7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:44,264 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244263"}]},"ts":"1690150244263"}
2023-07-23 22:10:44,265 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244263"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244263"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244263"}]},"ts":"1690150244263"}
2023-07-23 22:10:44,266 INFO [PEWorker-3]
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=128, state=RUNNABLE; OpenRegionProcedure 1b39f128dc522d59fd1f5c1cd5aecf25, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:44,267 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=131, state=RUNNABLE; OpenRegionProcedure e65ad33f4e5fdd596a9f7281b5206929, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:44,269 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=129, state=RUNNABLE; OpenRegionProcedure fbe85498bd6cfea3f60711efb41a9da8, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:44,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-23 22:10:44,271 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=127, state=RUNNABLE; OpenRegionProcedure 7e56e4043d461d05a3247e0e750a1ec4, server=jenkins-hbase4.apache.org,41457,1690150220404}] 2023-07-23 22:10:44,273 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=130, state=RUNNABLE; OpenRegionProcedure be0bdb29001dbe7412a1304704b39ad7, server=jenkins-hbase4.apache.org,46085,1690150220016}] 2023-07-23 22:10:44,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929. 
2023-07-23 22:10:44,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e65ad33f4e5fdd596a9f7281b5206929, NAME => 'Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 22:10:44,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:44,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,426 INFO [StoreOpener-e65ad33f4e5fdd596a9f7281b5206929-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8. 
2023-07-23 22:10:44,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fbe85498bd6cfea3f60711efb41a9da8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 22:10:44,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:44,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,428 DEBUG [StoreOpener-e65ad33f4e5fdd596a9f7281b5206929-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/f 2023-07-23 22:10:44,428 DEBUG [StoreOpener-e65ad33f4e5fdd596a9f7281b5206929-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/f 2023-07-23 22:10:44,428 INFO [StoreOpener-e65ad33f4e5fdd596a9f7281b5206929-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e65ad33f4e5fdd596a9f7281b5206929 columnFamilyName f 2023-07-23 22:10:44,429 INFO [StoreOpener-e65ad33f4e5fdd596a9f7281b5206929-1] regionserver.HStore(310): Store=e65ad33f4e5fdd596a9f7281b5206929/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:44,433 INFO [StoreOpener-fbe85498bd6cfea3f60711efb41a9da8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929 2023-07-23 22:10:44,435 DEBUG [StoreOpener-fbe85498bd6cfea3f60711efb41a9da8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/f 2023-07-23 22:10:44,435 DEBUG [StoreOpener-fbe85498bd6cfea3f60711efb41a9da8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/f 2023-07-23 22:10:44,436 INFO [StoreOpener-fbe85498bd6cfea3f60711efb41a9da8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fbe85498bd6cfea3f60711efb41a9da8 columnFamilyName f 2023-07-23 22:10:44,436 INFO [StoreOpener-fbe85498bd6cfea3f60711efb41a9da8-1] regionserver.HStore(310): Store=fbe85498bd6cfea3f60711efb41a9da8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:44,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e65ad33f4e5fdd596a9f7281b5206929 
2023-07-23 22:10:44,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:44,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e65ad33f4e5fdd596a9f7281b5206929; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10115804800, jitterRate=-0.05789226293563843}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:44,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e65ad33f4e5fdd596a9f7281b5206929: 2023-07-23 22:10:44,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fbe85498bd6cfea3f60711efb41a9da8 2023-07-23 22:10:44,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929., pid=133, masterSystemTime=1690150244418 2023-07-23 22:10:44,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929. 
2023-07-23 22:10:44,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929. 2023-07-23 22:10:44,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4. 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7e56e4043d461d05a3247e0e750a1ec4, NAME => 'Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 22:10:44,444 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=e65ad33f4e5fdd596a9f7281b5206929, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,444 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150244444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150244444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150244444"}]},"ts":"1690150244444"} 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect 
now enable 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:44,445 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fbe85498bd6cfea3f60711efb41a9da8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10176185600, jitterRate=-0.0522688627243042}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:44,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fbe85498bd6cfea3f60711efb41a9da8: 2023-07-23 22:10:44,446 INFO [StoreOpener-7e56e4043d461d05a3247e0e750a1ec4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,446 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8., pid=134, masterSystemTime=1690150244422 2023-07-23 22:10:44,447 DEBUG [StoreOpener-7e56e4043d461d05a3247e0e750a1ec4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/f 2023-07-23 22:10:44,448 DEBUG [StoreOpener-7e56e4043d461d05a3247e0e750a1ec4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/f 2023-07-23 22:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8. 2023-07-23 22:10:44,448 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8. 2023-07-23 22:10:44,448 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7. 
2023-07-23 22:10:44,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=131 2023-07-23 22:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be0bdb29001dbe7412a1304704b39ad7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 22:10:44,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=131, state=SUCCESS; OpenRegionProcedure e65ad33f4e5fdd596a9f7281b5206929, server=jenkins-hbase4.apache.org,41457,1690150220404 in 179 msec 2023-07-23 22:10:44,448 INFO [StoreOpener-7e56e4043d461d05a3247e0e750a1ec4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7e56e4043d461d05a3247e0e750a1ec4 columnFamilyName f 2023-07-23 22:10:44,448 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=fbe85498bd6cfea3f60711efb41a9da8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,448 DEBUG [PEWorker-5] 
assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244448"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150244448"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150244448"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150244448"}]},"ts":"1690150244448"} 2023-07-23 22:10:44,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:44,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,449 INFO [StoreOpener-7e56e4043d461d05a3247e0e750a1ec4-1] regionserver.HStore(310): Store=7e56e4043d461d05a3247e0e750a1ec4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:44,450 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, ASSIGN in 342 msec 2023-07-23 22:10:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,450 INFO [StoreOpener-be0bdb29001dbe7412a1304704b39ad7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,452 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=129 2023-07-23 22:10:44,452 DEBUG [StoreOpener-be0bdb29001dbe7412a1304704b39ad7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/f 2023-07-23 22:10:44,452 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; OpenRegionProcedure fbe85498bd6cfea3f60711efb41a9da8, server=jenkins-hbase4.apache.org,46085,1690150220016 in 181 msec 2023-07-23 22:10:44,452 DEBUG [StoreOpener-be0bdb29001dbe7412a1304704b39ad7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/f 2023-07-23 22:10:44,452 INFO [StoreOpener-be0bdb29001dbe7412a1304704b39ad7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be0bdb29001dbe7412a1304704b39ad7 columnFamilyName f 2023-07-23 22:10:44,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, ASSIGN in 346 msec 2023-07-23 22:10:44,453 INFO [StoreOpener-be0bdb29001dbe7412a1304704b39ad7-1] regionserver.HStore(310): Store=be0bdb29001dbe7412a1304704b39ad7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7e56e4043d461d05a3247e0e750a1ec4 2023-07-23 22:10:44,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:44,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7e56e4043d461d05a3247e0e750a1ec4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10810009920, jitterRate=0.006760627031326294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:44,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7e56e4043d461d05a3247e0e750a1ec4: 2023-07-23 22:10:44,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be0bdb29001dbe7412a1304704b39ad7 2023-07-23 22:10:44,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4., pid=135, masterSystemTime=1690150244418 2023-07-23 22:10:44,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4. 2023-07-23 22:10:44,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4. 2023-07-23 22:10:44,458 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25. 
2023-07-23 22:10:44,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b39f128dc522d59fd1f5c1cd5aecf25, NAME => 'Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 22:10:44,458 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=7e56e4043d461d05a3247e0e750a1ec4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404 2023-07-23 22:10:44,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1b39f128dc522d59fd1f5c1cd5aecf25 2023-07-23 22:10:44,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:44,459 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244458"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150244458"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150244458"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150244458"}]},"ts":"1690150244458"} 2023-07-23 22:10:44,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b39f128dc522d59fd1f5c1cd5aecf25 2023-07-23 22:10:44,459 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b39f128dc522d59fd1f5c1cd5aecf25 2023-07-23 22:10:44,459 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:44,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be0bdb29001dbe7412a1304704b39ad7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11047477440, jitterRate=0.028876513242721558}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:44,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be0bdb29001dbe7412a1304704b39ad7: 2023-07-23 22:10:44,460 INFO [StoreOpener-1b39f128dc522d59fd1f5c1cd5aecf25-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b39f128dc522d59fd1f5c1cd5aecf25 2023-07-23 22:10:44,460 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7., pid=136, masterSystemTime=1690150244422 2023-07-23 22:10:44,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7. 2023-07-23 22:10:44,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7. 
2023-07-23 22:10:44,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=127 2023-07-23 22:10:44,462 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=127, state=SUCCESS; OpenRegionProcedure 7e56e4043d461d05a3247e0e750a1ec4, server=jenkins-hbase4.apache.org,41457,1690150220404 in 189 msec 2023-07-23 22:10:44,462 DEBUG [StoreOpener-1b39f128dc522d59fd1f5c1cd5aecf25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/f 2023-07-23 22:10:44,462 DEBUG [StoreOpener-1b39f128dc522d59fd1f5c1cd5aecf25-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/f 2023-07-23 22:10:44,462 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=be0bdb29001dbe7412a1304704b39ad7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016 2023-07-23 22:10:44,462 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244462"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150244462"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150244462"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150244462"}]},"ts":"1690150244462"} 2023-07-23 22:10:44,462 INFO [StoreOpener-1b39f128dc522d59fd1f5c1cd5aecf25-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b39f128dc522d59fd1f5c1cd5aecf25 columnFamilyName f
2023-07-23 22:10:44,463 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, ASSIGN in 355 msec
2023-07-23 22:10:44,463 INFO [StoreOpener-1b39f128dc522d59fd1f5c1cd5aecf25-1] regionserver.HStore(310): Store=1b39f128dc522d59fd1f5c1cd5aecf25/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:44,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=130
2023-07-23 22:10:44,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=130, state=SUCCESS; OpenRegionProcedure be0bdb29001dbe7412a1304704b39ad7, server=jenkins-hbase4.apache.org,46085,1690150220016 in 191 msec
2023-07-23 22:10:44,466 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, ASSIGN in 359 msec
2023-07-23 22:10:44,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:44,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b39f128dc522d59fd1f5c1cd5aecf25; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11700708000, jitterRate=0.08971334993839264}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:44,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b39f128dc522d59fd1f5c1cd5aecf25:
2023-07-23 22:10:44,469 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25., pid=132, masterSystemTime=1690150244418
2023-07-23 22:10:44,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,471 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=1b39f128dc522d59fd1f5c1cd5aecf25, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,471 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244471"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150244471"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150244471"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150244471"}]},"ts":"1690150244471"}
2023-07-23 22:10:44,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=128
2023-07-23 22:10:44,473 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=128, state=SUCCESS; OpenRegionProcedure 1b39f128dc522d59fd1f5c1cd5aecf25, server=jenkins-hbase4.apache.org,41457,1690150220404 in 206 msec
2023-07-23 22:10:44,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126
2023-07-23 22:10:44,475 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, ASSIGN in 367 msec
2023-07-23 22:10:44,475 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:44,475 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150244475"}]},"ts":"1690150244475"}
2023-07-23 22:10:44,476 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta
2023-07-23 22:10:44,479 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:44,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 522 msec
2023-07-23 22:10:44,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126
2023-07-23 22:10:44,571 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 126 completed
2023-07-23 22:10:44,571 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms
2023-07-23 22:10:44,571 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:44,575 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states.
2023-07-23 22:10:44,575 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:44,575 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned.
2023-07-23 22:10:44,576 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:44,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove
2023-07-23 22:10:44,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:44,583 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove
2023-07-23 22:10:44,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove
2023-07-23 22:10:44,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=137, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137
2023-07-23 22:10:44,588 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150244588"}]},"ts":"1690150244588"}
2023-07-23 22:10:44,589 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta
2023-07-23 22:10:44,591 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING
2023-07-23 22:10:44,592 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, UNASSIGN}, {pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, UNASSIGN}, {pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, UNASSIGN}, {pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, UNASSIGN}, {pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, UNASSIGN}]
2023-07-23 22:10:44,595 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, UNASSIGN
2023-07-23 22:10:44,595 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, UNASSIGN
2023-07-23 22:10:44,595 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, UNASSIGN
2023-07-23 22:10:44,595 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, UNASSIGN
2023-07-23 22:10:44,595 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=137, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, UNASSIGN
2023-07-23 22:10:44,595 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=7e56e4043d461d05a3247e0e750a1ec4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,596 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244595"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244595"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244595"}]},"ts":"1690150244595"}
2023-07-23 22:10:44,596 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=fbe85498bd6cfea3f60711efb41a9da8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:44,596 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=1b39f128dc522d59fd1f5c1cd5aecf25, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,596 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244596"}]},"ts":"1690150244596"}
2023-07-23 22:10:44,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244596"}]},"ts":"1690150244596"}
2023-07-23 22:10:44,596 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=e65ad33f4e5fdd596a9f7281b5206929, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:44,596 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244596"}]},"ts":"1690150244596"}
2023-07-23 22:10:44,596 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=be0bdb29001dbe7412a1304704b39ad7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:44,597 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150244596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150244596"}]},"ts":"1690150244596"}
2023-07-23 22:10:44,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=138, state=RUNNABLE; CloseRegionProcedure 7e56e4043d461d05a3247e0e750a1ec4, server=jenkins-hbase4.apache.org,41457,1690150220404}]
2023-07-23 22:10:44,597 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=140, state=RUNNABLE; CloseRegionProcedure fbe85498bd6cfea3f60711efb41a9da8, server=jenkins-hbase4.apache.org,46085,1690150220016}]
2023-07-23 22:10:44,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=139, state=RUNNABLE; CloseRegionProcedure 1b39f128dc522d59fd1f5c1cd5aecf25, server=jenkins-hbase4.apache.org,41457,1690150220404}]
2023-07-23 22:10:44,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=142, state=RUNNABLE; CloseRegionProcedure e65ad33f4e5fdd596a9f7281b5206929, server=jenkins-hbase4.apache.org,41457,1690150220404}]
2023-07-23 22:10:44,600 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure be0bdb29001dbe7412a1304704b39ad7, server=jenkins-hbase4.apache.org,46085,1690150220016}]
2023-07-23 22:10:44,632 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-07-23 22:10:44,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137
2023-07-23 22:10:44,749 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:44,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:44,751 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be0bdb29001dbe7412a1304704b39ad7, disabling compactions & flushes
2023-07-23 22:10:44,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e65ad33f4e5fdd596a9f7281b5206929, disabling compactions & flushes
2023-07-23 22:10:44,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7. after waiting 0 ms
2023-07-23 22:10:44,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929. after waiting 0 ms
2023-07-23 22:10:44,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,758 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:44,759 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:44,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.
2023-07-23 22:10:44,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be0bdb29001dbe7412a1304704b39ad7:
2023-07-23 22:10:44,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.
2023-07-23 22:10:44,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e65ad33f4e5fdd596a9f7281b5206929:
2023-07-23 22:10:44,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:44,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:44,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fbe85498bd6cfea3f60711efb41a9da8, disabling compactions & flushes
2023-07-23 22:10:44,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8. after waiting 0 ms
2023-07-23 22:10:44,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,764 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=be0bdb29001dbe7412a1304704b39ad7, regionState=CLOSED
2023-07-23 22:10:44,764 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244764"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244764"}]},"ts":"1690150244764"}
2023-07-23 22:10:44,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:44,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:44,765 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=e65ad33f4e5fdd596a9f7281b5206929, regionState=CLOSED
2023-07-23 22:10:44,765 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244765"}]},"ts":"1690150244765"}
2023-07-23 22:10:44,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7e56e4043d461d05a3247e0e750a1ec4, disabling compactions & flushes
2023-07-23 22:10:44,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4. after waiting 0 ms
2023-07-23 22:10:44,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,769 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141
2023-07-23 22:10:44,769 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure be0bdb29001dbe7412a1304704b39ad7, server=jenkins-hbase4.apache.org,46085,1690150220016 in 165 msec
2023-07-23 22:10:44,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:44,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.
2023-07-23 22:10:44,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fbe85498bd6cfea3f60711efb41a9da8:
2023-07-23 22:10:44,774 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=142
2023-07-23 22:10:44,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=be0bdb29001dbe7412a1304704b39ad7, UNASSIGN in 177 msec
2023-07-23 22:10:44,774 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=142, state=SUCCESS; CloseRegionProcedure e65ad33f4e5fdd596a9f7281b5206929, server=jenkins-hbase4.apache.org,41457,1690150220404 in 169 msec
2023-07-23 22:10:44,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:44,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e65ad33f4e5fdd596a9f7281b5206929, UNASSIGN in 182 msec
2023-07-23 22:10:44,776 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=fbe85498bd6cfea3f60711efb41a9da8, regionState=CLOSED
2023-07-23 22:10:44,776 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244776"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244776"}]},"ts":"1690150244776"}
2023-07-23 22:10:44,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:44,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.
2023-07-23 22:10:44,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7e56e4043d461d05a3247e0e750a1ec4:
2023-07-23 22:10:44,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:44,780 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b39f128dc522d59fd1f5c1cd5aecf25, disabling compactions & flushes
2023-07-23 22:10:44,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25. after waiting 0 ms
2023-07-23 22:10:44,781 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,781 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=7e56e4043d461d05a3247e0e750a1ec4, regionState=CLOSED
2023-07-23 22:10:44,781 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690150244781"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244781"}]},"ts":"1690150244781"}
2023-07-23 22:10:44,781 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=140
2023-07-23 22:10:44,782 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; CloseRegionProcedure fbe85498bd6cfea3f60711efb41a9da8, server=jenkins-hbase4.apache.org,46085,1690150220016 in 180 msec
2023-07-23 22:10:44,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=fbe85498bd6cfea3f60711efb41a9da8, UNASSIGN in 190 msec
2023-07-23 22:10:44,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:44,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.
2023-07-23 22:10:44,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b39f128dc522d59fd1f5c1cd5aecf25:
2023-07-23 22:10:44,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=138
2023-07-23 22:10:44,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; CloseRegionProcedure 7e56e4043d461d05a3247e0e750a1ec4, server=jenkins-hbase4.apache.org,41457,1690150220404 in 189 msec
2023-07-23 22:10:44,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,790 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e56e4043d461d05a3247e0e750a1ec4, UNASSIGN in 197 msec
2023-07-23 22:10:44,790 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=1b39f128dc522d59fd1f5c1cd5aecf25, regionState=CLOSED
2023-07-23 22:10:44,791 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690150244790"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150244790"}]},"ts":"1690150244790"}
2023-07-23 22:10:44,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=139
2023-07-23 22:10:44,793 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=139, state=SUCCESS; CloseRegionProcedure 1b39f128dc522d59fd1f5c1cd5aecf25, server=jenkins-hbase4.apache.org,41457,1690150220404 in 194 msec
2023-07-23 22:10:44,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137
2023-07-23 22:10:44,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b39f128dc522d59fd1f5c1cd5aecf25, UNASSIGN in 201 msec
2023-07-23 22:10:44,796 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150244796"}]},"ts":"1690150244796"}
2023-07-23 22:10:44,797 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta
2023-07-23 22:10:44,798 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED
2023-07-23 22:10:44,802 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=137, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 215 msec
2023-07-23 22:10:44,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=137
2023-07-23 22:10:44,890 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 137 completed
2023-07-23 22:10:44,890 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1498307518
2023-07-23 22:10:44,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1498307518
2023-07-23 22:10:44,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:44,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498307518
2023-07-23 22:10:44,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:44,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:44,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled
2023-07-23 22:10:44,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1498307518, current retry=0
2023-07-23 22:10:44,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1498307518.
2023-07-23 22:10:44,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:44,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:44,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:44,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove
2023-07-23 22:10:44,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable
2023-07-23 22:10:44,909 INFO [Listener at localhost/42675] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove
2023-07-23 22:10:44,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove
2023-07-23 22:10:44,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove
	at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163)
	at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78)
	at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429)
	at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132)
	at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413)
	at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:44,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 918 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:53220 deadline: 1690150304910, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove
2023-07-23 22:10:44,911 DEBUG [Listener at localhost/42675] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it.
2023-07-23 22:10:44,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove
2023-07-23 22:10:44,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,916 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=149, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1498307518'
2023-07-23 22:10:44,918 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=149, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:44,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498307518
2023-07-23 22:10:44,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:44,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:44,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149
2023-07-23 22:10:44,928 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:44,929 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:44,929 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:44,929 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:44,929 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,932 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/recovered.edits]
2023-07-23 22:10:44,932 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/recovered.edits]
2023-07-23 22:10:44,932 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/recovered.edits]
2023-07-23 22:10:44,933 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/recovered.edits]
2023-07-23 22:10:44,933 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/f, FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/recovered.edits]
2023-07-23 22:10:44,947 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8/recovered.edits/4.seqid
2023-07-23 22:10:44,948 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/fbe85498bd6cfea3f60711efb41a9da8
2023-07-23 22:10:44,949 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4/recovered.edits/4.seqid
2023-07-23 22:10:44,950 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25/recovered.edits/4.seqid
2023-07-23 22:10:44,950 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7/recovered.edits/4.seqid
2023-07-23 22:10:44,950 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/7e56e4043d461d05a3247e0e750a1ec4
2023-07-23 22:10:44,951 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/1b39f128dc522d59fd1f5c1cd5aecf25
2023-07-23 22:10:44,951 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/be0bdb29001dbe7412a1304704b39ad7
2023-07-23 22:10:44,951 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/recovered.edits/4.seqid to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/archive/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929/recovered.edits/4.seqid
2023-07-23 22:10:44,952 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/.tmp/data/default/Group_testDisabledTableMove/e65ad33f4e5fdd596a9f7281b5206929
2023-07-23 22:10:44,952 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions
2023-07-23 22:10:44,955 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=149, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,958 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta
2023-07-23 22:10:44,963 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor.
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=149, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states.
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150244965"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150244965"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150244965"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150244965"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,965 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150244965"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,968 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META
2023-07-23 22:10:44,968 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7e56e4043d461d05a3247e0e750a1ec4, NAME => 'Group_testDisabledTableMove,,1690150243956.7e56e4043d461d05a3247e0e750a1ec4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 1b39f128dc522d59fd1f5c1cd5aecf25, NAME => 'Group_testDisabledTableMove,aaaaa,1690150243956.1b39f128dc522d59fd1f5c1cd5aecf25.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => fbe85498bd6cfea3f60711efb41a9da8, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690150243956.fbe85498bd6cfea3f60711efb41a9da8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => be0bdb29001dbe7412a1304704b39ad7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690150243956.be0bdb29001dbe7412a1304704b39ad7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e65ad33f4e5fdd596a9f7281b5206929, NAME => 'Group_testDisabledTableMove,zzzzz,1690150243956.e65ad33f4e5fdd596a9f7281b5206929.', STARTKEY => 'zzzzz', ENDKEY => ''}]
2023-07-23 22:10:44,968 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted.
2023-07-23 22:10:44,968 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150244968"}]},"ts":"9223372036854775807"}
2023-07-23 22:10:44,969 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META
2023-07-23 22:10:44,971 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=149, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove
2023-07-23 22:10:44,972 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 59 msec
2023-07-23 22:10:45,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149
2023-07-23 22:10:45,029 INFO [Listener at localhost/42675] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed
2023-07-23 22:10:45,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:45,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:45,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 22:10:45,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 22:10:45,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:45,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885] to rsgroup default
2023-07-23 22:10:45,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:45,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1498307518
2023-07-23 22:10:45,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:45,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:45,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1498307518, current retry=0
2023-07-23 22:10:45,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34191,1690150220233, jenkins-hbase4.apache.org,39885,1690150225039] are moved back to Group_testDisabledTableMove_1498307518
2023-07-23 22:10:45,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1498307518 => default
2023-07-23 22:10:45,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:45,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1498307518
2023-07-23 22:10:45,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:45,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:45,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5
2023-07-23 22:10:45,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:45,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 22:10:45,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 22:10:45,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:45,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 22:10:45,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:45,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 22:10:45,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:45,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 22:10:45,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:45,058 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 22:10:45,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 22:10:45,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:45,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:45,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 22:10:45,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:45,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:45,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master
2023-07-23 22:10:45,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:45,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 952 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151445077, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
2023-07-23 22:10:45,078 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	...
1 more 2023-07-23 22:10:45,080 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:45,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:45,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:45,081 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:45,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:45,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:45,108 INFO [Listener at localhost/42675] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502 (was 499) Potentially hanging thread: hconnection-0x3ed81975-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632019018_17 at /127.0.0.1:58068 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_295696383_17 at /127.0.0.1:39444 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x30296f6d-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=776 (was 747) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=448 (was 448), ProcessCount=176 (was 176), AvailableMemoryMB=5774 (was 5791) 2023-07-23 22:10:45,108 WARN [Listener at localhost/42675] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 22:10:45,130 INFO [Listener at localhost/42675] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=502, OpenFileDescriptor=776, MaxFileDescriptor=60000, SystemLoadAverage=448, ProcessCount=176, AvailableMemoryMB=5777 2023-07-23 22:10:45,130 WARN [Listener at localhost/42675] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 22:10:45,130 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-23 22:10:45,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:45,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:45,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:45,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:45,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:45,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:45,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:45,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:45,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:45,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:45,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:45,147 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:45,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:45,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:45,150 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:45,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:45,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:45,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:45,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:45,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37045] to rsgroup master 2023-07-23 22:10:45,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:45,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] ipc.CallRunner(144): callId: 980 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:53220 deadline: 1690151445159, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 2023-07-23 22:10:45,160 WARN [Listener at localhost/42675] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37045 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:45,162 INFO [Listener at localhost/42675] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:45,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:45,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:45,163 INFO [Listener at localhost/42675] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34191, jenkins-hbase4.apache.org:39885, jenkins-hbase4.apache.org:41457, jenkins-hbase4.apache.org:46085], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:45,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:45,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37045] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:45,164 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 22:10:45,164 INFO [Listener at localhost/42675] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 22:10:45,164 DEBUG [Listener at localhost/42675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2ea64db6 to 127.0.0.1:52385 2023-07-23 22:10:45,164 DEBUG [Listener at localhost/42675] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 
22:10:45,166 DEBUG [Listener at localhost/42675] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 22:10:45,166 DEBUG [Listener at localhost/42675] util.JVMClusterUtil(257): Found active master hash=807557898, stopped=false 2023-07-23 22:10:45,166 DEBUG [Listener at localhost/42675] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 22:10:45,167 DEBUG [Listener at localhost/42675] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 22:10:45,167 INFO [Listener at localhost/42675] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37045,1690150218110 2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:45,271 INFO [Listener at localhost/42675] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): 
regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:45,272 DEBUG [Listener at localhost/42675] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3969437c to 127.0.0.1:52385
2023-07-23 22:10:45,272 DEBUG [Listener at localhost/42675] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46085,1690150220016' *****
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34191,1690150220233' *****
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41457,1690150220404' *****
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39885,1690150225039' *****
2023-07-23 22:10:45,273 INFO [Listener at localhost/42675] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:45,273 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:45,271 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:45,273 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:45,273 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:45,273 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:45,278 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:45,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:45,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:45,280 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:45,286 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:45,289 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:45,289 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:45,289 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:45,289 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,289 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,290 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,295 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:45,295 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,296 INFO [RS:2;jenkins-hbase4:41457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@62311427{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:45,296 INFO [RS:3;jenkins-hbase4:39885] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@780935ef{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:45,296 INFO [RS:1;jenkins-hbase4:34191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@617ee32b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:45,296 INFO [RS:0;jenkins-hbase4:46085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@311e7d3c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:45,300 INFO [RS:0;jenkins-hbase4:46085] server.AbstractConnector(383): Stopped ServerConnector@efe2f50{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,300 INFO [RS:3;jenkins-hbase4:39885] server.AbstractConnector(383): Stopped ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,300 INFO [RS:0;jenkins-hbase4:46085] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:45,300 INFO [RS:2;jenkins-hbase4:41457] server.AbstractConnector(383): Stopped ServerConnector@77041a21{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,300 INFO [RS:3;jenkins-hbase4:39885] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:45,301 INFO [RS:0;jenkins-hbase4:46085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1eb685a1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:45,301 INFO [RS:1;jenkins-hbase4:34191] server.AbstractConnector(383): Stopped ServerConnector@7edcaee8{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,301 INFO [RS:2;jenkins-hbase4:41457] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:45,303 INFO [RS:1;jenkins-hbase4:34191] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:45,303 INFO [RS:3;jenkins-hbase4:39885] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:45,307 INFO [RS:0;jenkins-hbase4:46085] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5033ffc4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:45,310 INFO [RS:1;jenkins-hbase4:34191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fe3f683{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:45,310 INFO [RS:2;jenkins-hbase4:41457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@505a01fa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:45,311 INFO [RS:1;jenkins-hbase4:34191] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f6d07f1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:45,310 INFO [RS:3;jenkins-hbase4:39885] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c02caab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:45,315 INFO [RS:2;jenkins-hbase4:41457] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f7fbf09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:45,316 INFO [RS:0;jenkins-hbase4:46085] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:45,316 INFO [RS:2;jenkins-hbase4:41457] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:45,316 INFO [RS:2;jenkins-hbase4:41457] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:45,317 INFO [RS:2;jenkins-hbase4:41457] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:45,317 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(3305): Received CLOSE for f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:45,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f992c8675ecb32048f91b79a022c662a, disabling compactions & flushes
2023-07-23 22:10:45,318 INFO [RS:0;jenkins-hbase4:46085] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:45,318 INFO [RS:0;jenkins-hbase4:46085] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:45,318 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(3305): Received CLOSE for f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:45,318 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(3305): Received CLOSE for 4f43d3d673ff6f774d9a1575bc603179
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f13ed9a05b812dd1ab7a8c5d46530103, disabling compactions & flushes
2023-07-23 22:10:45,319 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(3305): Received CLOSE for 4720e37820d079fec06cb3ab19dd54a2
2023-07-23 22:10:45,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:45,319 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:45,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103. after waiting 0 ms
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:45,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f13ed9a05b812dd1ab7a8c5d46530103 1/1 column families, dataSize=22.10 KB heapSize=36.55 KB
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a. after waiting 0 ms
2023-07-23 22:10:45,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:45,320 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:45,320 DEBUG [RS:2;jenkins-hbase4:41457] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02648456 to 127.0.0.1:52385
2023-07-23 22:10:45,320 DEBUG [RS:2;jenkins-hbase4:41457] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,320 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1474): Waiting on 1 regions to close
2023-07-23 22:10:45,320 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1478): Online Regions={f992c8675ecb32048f91b79a022c662a=testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.}
2023-07-23 22:10:45,321 DEBUG [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1504): Waiting on f992c8675ecb32048f91b79a022c662a
2023-07-23 22:10:45,321 INFO [RS:3;jenkins-hbase4:39885] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:45,321 INFO [RS:3;jenkins-hbase4:39885] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:45,321 INFO [RS:3;jenkins-hbase4:39885] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:45,321 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:45,321 DEBUG [RS:3;jenkins-hbase4:39885] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4f9c4082 to 127.0.0.1:52385
2023-07-23 22:10:45,321 DEBUG [RS:3;jenkins-hbase4:39885] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,322 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39885,1690150225039; all regions closed.
2023-07-23 22:10:45,322 DEBUG [RS:0;jenkins-hbase4:46085] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66764ba4 to 127.0.0.1:52385
2023-07-23 22:10:45,322 DEBUG [RS:0;jenkins-hbase4:46085] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,323 INFO [RS:0;jenkins-hbase4:46085] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:45,323 INFO [RS:0;jenkins-hbase4:46085] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:45,323 INFO [RS:0;jenkins-hbase4:46085] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:45,323 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(3305): Received CLOSE for 1588230740
2023-07-23 22:10:45,324 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1474): Waiting on 4 regions to close
2023-07-23 22:10:45,334 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1478): Online Regions={f13ed9a05b812dd1ab7a8c5d46530103=hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103., 4f43d3d673ff6f774d9a1575bc603179=unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179., 1588230740=hbase:meta,,1.1588230740, 4720e37820d079fec06cb3ab19dd54a2=hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.}
2023-07-23 22:10:45,324 INFO [RS:1;jenkins-hbase4:34191] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:45,335 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1504): Waiting on 1588230740, 4720e37820d079fec06cb3ab19dd54a2, 4f43d3d673ff6f774d9a1575bc603179, f13ed9a05b812dd1ab7a8c5d46530103
2023-07-23 22:10:45,335 INFO [RS:1;jenkins-hbase4:34191] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:45,334 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 22:10:45,335 INFO [RS:1;jenkins-hbase4:34191] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:45,335 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 22:10:45,335 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:45,335 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 22:10:45,335 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 22:10:45,335 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 22:10:45,335 DEBUG [RS:1;jenkins-hbase4:34191] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d3ba0b4 to 127.0.0.1:52385
2023-07-23 22:10:45,336 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.90 KB heapSize=122.84 KB
2023-07-23 22:10:45,336 DEBUG [RS:1;jenkins-hbase4:34191] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,336 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34191,1690150220233; all regions closed.
2023-07-23 22:10:45,369 DEBUG [RS:3;jenkins-hbase4:39885] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs
2023-07-23 22:10:45,370 INFO [RS:3;jenkins-hbase4:39885] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39885%2C1690150225039:(num 1690150225520)
2023-07-23 22:10:45,370 DEBUG [RS:3;jenkins-hbase4:39885] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,370 INFO [RS:3;jenkins-hbase4:39885] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,370 DEBUG [RS:1;jenkins-hbase4:34191] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs
2023-07-23 22:10:45,370 INFO [RS:1;jenkins-hbase4:34191] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34191%2C1690150220233:(num 1690150222705)
2023-07-23 22:10:45,370 DEBUG [RS:1;jenkins-hbase4:34191] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,370 INFO [RS:1;jenkins-hbase4:34191] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/testRename/f992c8675ecb32048f91b79a022c662a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7
2023-07-23 22:10:45,397 INFO [RS:3;jenkins-hbase4:39885] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:45,397 INFO [RS:1;jenkins-hbase4:34191] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:45,397 INFO [RS:3;jenkins-hbase4:39885] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:45,397 INFO [RS:3;jenkins-hbase4:39885] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:45,397 INFO [RS:3;jenkins-hbase4:39885] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:45,398 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:45,403 INFO [RS:1;jenkins-hbase4:34191] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:45,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:45,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f992c8675ecb32048f91b79a022c662a:
2023-07-23 22:10:45,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690150238348.f992c8675ecb32048f91b79a022c662a.
2023-07-23 22:10:45,404 INFO [RS:3;jenkins-hbase4:39885] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39885
2023-07-23 22:10:45,403 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:45,404 INFO [RS:1;jenkins-hbase4:34191] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:45,406 INFO [RS:1;jenkins-hbase4:34191] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:45,407 INFO [RS:1;jenkins-hbase4:34191] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34191
2023-07-23 22:10:45,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.10 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/.tmp/m/00f0fccfde414d45b92face08cf78bea
2023-07-23 22:10:45,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 00f0fccfde414d45b92face08cf78bea
2023-07-23 22:10:45,420 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.92 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/info/cf4cb1877a584c7289fc712244fa85b0
2023-07-23 22:10:45,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/.tmp/m/00f0fccfde414d45b92face08cf78bea as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m/00f0fccfde414d45b92face08cf78bea
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,427 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,428 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,428 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39885,1690150225039
2023-07-23 22:10:45,428 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,428 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf4cb1877a584c7289fc712244fa85b0
2023-07-23 22:10:45,429 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39885,1690150225039]
2023-07-23 22:10:45,429 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:45,429 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:45,429 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:45,429 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39885,1690150225039; numProcessing=1
2023-07-23 22:10:45,429 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34191,1690150220233
2023-07-23 22:10:45,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 00f0fccfde414d45b92face08cf78bea
2023-07-23 22:10:45,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/m/00f0fccfde414d45b92face08cf78bea, entries=22, sequenceid=101, filesize=5.9 K
2023-07-23 22:10:45,434 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39885,1690150225039 already deleted, retry=false
2023-07-23 22:10:45,434 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39885,1690150225039 expired; onlineServers=3
2023-07-23 22:10:45,434 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34191,1690150220233]
2023-07-23 22:10:45,434 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34191,1690150220233; numProcessing=2
2023-07-23 22:10:45,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.10 KB/22631, heapSize ~36.53 KB/37408, currentSize=0 B/0 for f13ed9a05b812dd1ab7a8c5d46530103 in 116ms, sequenceid=101, compaction requested=false
2023-07-23 22:10:45,436 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34191,1690150220233 already deleted, retry=false
2023-07-23 22:10:45,436 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34191,1690150220233 expired; onlineServers=2
2023-07-23 22:10:45,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/rsgroup/f13ed9a05b812dd1ab7a8c5d46530103/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29
2023-07-23 22:10:45,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-23 22:10:45,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:45,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f13ed9a05b812dd1ab7a8c5d46530103:
2023-07-23 22:10:45,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690150224126.f13ed9a05b812dd1ab7a8c5d46530103.
2023-07-23 22:10:45,453 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4f43d3d673ff6f774d9a1575bc603179, disabling compactions & flushes
2023-07-23 22:10:45,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:45,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:45,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179. after waiting 0 ms
2023-07-23 22:10:45,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:45,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/rep_barrier/92830afd2f6e4eb98ce29d935d649240
2023-07-23 22:10:45,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/default/unmovedTable/4f43d3d673ff6f774d9a1575bc603179/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7
2023-07-23 22:10:45,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4f43d3d673ff6f774d9a1575bc603179:
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690150240003.4f43d3d673ff6f774d9a1575bc603179.
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4720e37820d079fec06cb3ab19dd54a2, disabling compactions & flushes
2023-07-23 22:10:45,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2. after waiting 0 ms
2023-07-23 22:10:45,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.
2023-07-23 22:10:45,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4720e37820d079fec06cb3ab19dd54a2 1/1 column families, dataSize=78 B heapSize=488 B
2023-07-23 22:10:45,465 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92830afd2f6e4eb98ce29d935d649240
2023-07-23 22:10:45,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/.tmp/info/f9b16549f3524a0aae3a333b163fc07d
2023-07-23 22:10:45,485 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=200 (bloomFilter=false), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/table/ee584d18e8e844149ebae38097655b51
2023-07-23 22:10:45,493 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee584d18e8e844149ebae38097655b51
2023-07-23 22:10:45,494 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/.tmp/info/f9b16549f3524a0aae3a333b163fc07d as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/info/f9b16549f3524a0aae3a333b163fc07d
2023-07-23 22:10:45,494 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/info/cf4cb1877a584c7289fc712244fa85b0 as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/info/cf4cb1877a584c7289fc712244fa85b0
2023-07-23 22:10:45,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cf4cb1877a584c7289fc712244fa85b0
2023-07-23 22:10:45,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/info/cf4cb1877a584c7289fc712244fa85b0, entries=97, sequenceid=200, filesize=15.9 K
2023-07-23 22:10:45,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/info/f9b16549f3524a0aae3a333b163fc07d, entries=2, sequenceid=6, filesize=4.8 K
2023-07-23 22:10:45,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/rep_barrier/92830afd2f6e4eb98ce29d935d649240 as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/rep_barrier/92830afd2f6e4eb98ce29d935d649240
2023-07-23 22:10:45,504 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped
2023-07-23 22:10:45,504 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped
2023-07-23 22:10:45,505 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 4720e37820d079fec06cb3ab19dd54a2 in 42ms, sequenceid=6, compaction requested=false
2023-07-23 22:10:45,512 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92830afd2f6e4eb98ce29d935d649240
2023-07-23 22:10:45,512 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/rep_barrier/92830afd2f6e4eb98ce29d935d649240, entries=18, sequenceid=200, filesize=6.9 K
2023-07-23 22:10:45,513 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/.tmp/table/ee584d18e8e844149ebae38097655b51 as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/table/ee584d18e8e844149ebae38097655b51
2023-07-23 22:10:45,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/namespace/4720e37820d079fec06cb3ab19dd54a2/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1
2023-07-23 22:10:45,521 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41457,1690150220404; all regions closed.
2023-07-23 22:10:45,523 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee584d18e8e844149ebae38097655b51
2023-07-23 22:10:45,523 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/table/ee584d18e8e844149ebae38097655b51, entries=31, sequenceid=200, filesize=7.4 K
2023-07-23 22:10:45,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.
2023-07-23 22:10:45,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4720e37820d079fec06cb3ab19dd54a2:
2023-07-23 22:10:45,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690150223750.4720e37820d079fec06cb3ab19dd54a2.
2023-07-23 22:10:45,528 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.90 KB/79773, heapSize ~122.79 KB/125736, currentSize=0 B/0 for 1588230740 in 192ms, sequenceid=200, compaction requested=false
2023-07-23 22:10:45,535 DEBUG [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1504): Waiting on 1588230740
2023-07-23 22:10:45,541 DEBUG [RS:2;jenkins-hbase4:41457] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs
2023-07-23 22:10:45,541 INFO [RS:2;jenkins-hbase4:41457] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41457%2C1690150220404:(num 1690150222702)
2023-07-23 22:10:45,541 DEBUG [RS:2;jenkins-hbase4:41457] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,541 INFO [RS:2;jenkins-hbase4:41457] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,541 INFO [RS:2;jenkins-hbase4:41457] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:45,541 INFO [RS:2;jenkins-hbase4:41457] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:45,541 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:45,541 INFO [RS:2;jenkins-hbase4:41457] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:45,542 INFO [RS:2;jenkins-hbase4:41457] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:45,542 INFO [RS:2;jenkins-hbase4:41457] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41457
2023-07-23 22:10:45,546 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/data/hbase/meta/1588230740/recovered.edits/203.seqid, newMaxSeqId=203, maxSeqId=1
2023-07-23 22:10:45,547 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-23 22:10:45,548 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:45,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 22:10:45,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:45,548 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:45,548 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,548 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41457,1690150220404
2023-07-23 22:10:45,549 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41457,1690150220404]
2023-07-23 22:10:45,550 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41457,1690150220404; numProcessing=3
2023-07-23 22:10:45,552 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41457,1690150220404 already deleted, retry=false
2023-07-23 22:10:45,552 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41457,1690150220404 expired; onlineServers=1
2023-07-23 22:10:45,587 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:45,587 INFO [RS:1;jenkins-hbase4:34191] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34191,1690150220233; zookeeper connection closed.
2023-07-23 22:10:45,587 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:34191-0x101943c28b20002, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:45,587 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c2659d5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c2659d5
2023-07-23 22:10:45,687 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:45,687 INFO [RS:3;jenkins-hbase4:39885] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39885,1690150225039; zookeeper connection closed.
2023-07-23 22:10:45,687 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:39885-0x101943c28b2000b, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:45,688 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6de29ee1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6de29ee1
2023-07-23 22:10:45,735 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46085,1690150220016; all regions closed.
2023-07-23 22:10:45,741 DEBUG [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs
2023-07-23 22:10:45,741 INFO [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46085%2C1690150220016.meta:.meta(num 1690150223419)
2023-07-23 22:10:45,746 DEBUG [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/oldWALs
2023-07-23 22:10:45,746 INFO [RS:0;jenkins-hbase4:46085] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46085%2C1690150220016:(num 1690150222702)
2023-07-23 22:10:45,746 DEBUG [RS:0;jenkins-hbase4:46085] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,746 INFO [RS:0;jenkins-hbase4:46085] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:45,747 INFO [RS:0;jenkins-hbase4:46085] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:45,747 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:45,747 INFO [RS:0;jenkins-hbase4:46085] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46085
2023-07-23 22:10:45,751 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46085,1690150220016
2023-07-23 22:10:45,751 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:45,752 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46085,1690150220016]
2023-07-23 22:10:45,752 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46085,1690150220016; numProcessing=4
2023-07-23 22:10:45,753 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46085,1690150220016 already deleted, retry=false
2023-07-23 22:10:45,753 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46085,1690150220016 expired; onlineServers=0
2023-07-23 22:10:45,753 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37045,1690150218110' *****
2023-07-23 22:10:45,753 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-23 22:10:45,754 DEBUG [M:0;jenkins-hbase4:37045] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@711f5131, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:45,754 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:45,756 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:45,756 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:45,756 INFO [M:0;jenkins-hbase4:37045] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@25becdec{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 22:10:45,756 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 22:10:45,756 INFO [M:0;jenkins-hbase4:37045] server.AbstractConnector(383): Stopped ServerConnector@6c6f2d1b{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,757 INFO [M:0;jenkins-hbase4:37045] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:45,757 INFO [M:0;jenkins-hbase4:37045] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2269cb1d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:45,757 INFO [M:0;jenkins-hbase4:37045] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e48a43a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:45,758 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37045,1690150218110
2023-07-23 22:10:45,758 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37045,1690150218110; all regions closed.
2023-07-23 22:10:45,758 DEBUG [M:0;jenkins-hbase4:37045] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:45,758 INFO [M:0;jenkins-hbase4:37045] master.HMaster(1491): Stopping master jetty server
2023-07-23 22:10:45,759 INFO [M:0;jenkins-hbase4:37045] server.AbstractConnector(383): Stopped ServerConnector@399a4cd{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:45,759 DEBUG [M:0;jenkins-hbase4:37045] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-23 22:10:45,759 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-23 22:10:45,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150222079] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150222079,5,FailOnTimeoutGroup]
2023-07-23 22:10:45,759 DEBUG [M:0;jenkins-hbase4:37045] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-23 22:10:45,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150222076] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150222076,5,FailOnTimeoutGroup]
2023-07-23 22:10:45,759 INFO [M:0;jenkins-hbase4:37045] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-23 22:10:45,760 INFO [M:0;jenkins-hbase4:37045] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-23 22:10:45,760 INFO [M:0;jenkins-hbase4:37045] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-23 22:10:45,760 DEBUG [M:0;jenkins-hbase4:37045] master.HMaster(1512): Stopping service threads
2023-07-23 22:10:45,760 INFO [M:0;jenkins-hbase4:37045] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-23 22:10:45,760 ERROR [M:0;jenkins-hbase4:37045] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup]
2023-07-23 22:10:45,761 INFO [M:0;jenkins-hbase4:37045] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-23 22:10:45,761 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-23 22:10:45,761 DEBUG [M:0;jenkins-hbase4:37045] zookeeper.ZKUtil(398): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-23 22:10:45,762 WARN [M:0;jenkins-hbase4:37045] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-23 22:10:45,762 INFO [M:0;jenkins-hbase4:37045] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-23 22:10:45,762 INFO [M:0;jenkins-hbase4:37045] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-23 22:10:45,762 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-23 22:10:45,762 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:45,762 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:45,762 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-23 22:10:45,762 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:45,762 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=500.02 KB heapSize=598.02 KB
2023-07-23 22:10:45,778 INFO [M:0;jenkins-hbase4:37045] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=500.02 KB at sequenceid=1104 (bloomFilter=true), to=hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc1d453de0645ddaf7fb25a00bc3a2a
2023-07-23 22:10:45,784 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1bc1d453de0645ddaf7fb25a00bc3a2a as hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc1d453de0645ddaf7fb25a00bc3a2a
2023-07-23 22:10:45,789 INFO [M:0;jenkins-hbase4:37045] regionserver.HStore(1080): Added hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1bc1d453de0645ddaf7fb25a00bc3a2a, entries=148, sequenceid=1104, filesize=26.2 K
2023-07-23 22:10:45,790 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegion(2948): Finished flush of dataSize ~500.02 KB/512017, heapSize ~598.01 KB/612360, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=1104, compaction requested=false
2023-07-23 22:10:45,791 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-23 22:10:45,791 DEBUG [M:0;jenkins-hbase4:37045] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-23 22:10:45,795 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:45,795 INFO [M:0;jenkins-hbase4:37045] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-23 22:10:45,795 INFO [M:0;jenkins-hbase4:37045] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37045
2023-07-23 22:10:45,800 DEBUG [M:0;jenkins-hbase4:37045] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37045,1690150218110 already deleted, retry=false
2023-07-23 22:10:45,988 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:45,988 INFO [M:0;jenkins-hbase4:37045] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37045,1690150218110; zookeeper connection closed.
2023-07-23 22:10:45,988 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): master:37045-0x101943c28b20000, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:46,088 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:46,088 INFO [RS:0;jenkins-hbase4:46085] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46085,1690150220016; zookeeper connection closed.
2023-07-23 22:10:46,088 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:46085-0x101943c28b20001, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:46,089 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@62e8f13e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@62e8f13e
2023-07-23 22:10:46,188 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:46,188 INFO [RS:2;jenkins-hbase4:41457] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41457,1690150220404; zookeeper connection closed.
2023-07-23 22:10:46,188 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): regionserver:41457-0x101943c28b20003, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:46,189 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d43a032] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d43a032
2023-07-23 22:10:46,189 INFO [Listener at localhost/42675] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-23 22:10:46,189 WARN [Listener at localhost/42675] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 22:10:46,194 INFO [Listener at localhost/42675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 22:10:46,297 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 22:10:46,298 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1350916087-172.31.14.131-1690150214233 (Datanode Uuid 815b915b-18a9-409a-9813-837d7f9c5956) service to localhost/127.0.0.1:36271
2023-07-23 22:10:46,299 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data5/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,299 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data6/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,302 WARN [Listener at localhost/42675] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 22:10:46,305 INFO [Listener at localhost/42675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 22:10:46,408 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 22:10:46,408 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1350916087-172.31.14.131-1690150214233 (Datanode Uuid e1821169-9edd-4e9d-bee0-ae4582003e75) service to localhost/127.0.0.1:36271
2023-07-23 22:10:46,409 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data3/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,409 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data4/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,411 WARN [Listener at localhost/42675] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-23 22:10:46,413 INFO [Listener at localhost/42675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 22:10:46,516 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-23 22:10:46,516 WARN [BP-1350916087-172.31.14.131-1690150214233 heartbeating to localhost/127.0.0.1:36271] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1350916087-172.31.14.131-1690150214233 (Datanode Uuid 14886a0f-ba7a-4d66-a809-40a77b2db62c) service to localhost/127.0.0.1:36271
2023-07-23 22:10:46,517 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data1/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,517 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/cluster_8eb791f9-f569-03c2-214d-a0cff93b61c9/dfs/data/data2/current/BP-1350916087-172.31.14.131-1690150214233] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-23 22:10:46,546 INFO [Listener at localhost/42675] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-23 22:10:46,670 INFO [Listener at localhost/42675] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1293): Minicluster is down
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.log.dir so I do NOT create it in target/test-data/61047217-f98e-161f-af73-83a1e8d795c7
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b85df491-0aa9-6753-69c5-648e2e247c65/hadoop.tmp.dir so I do NOT create it in target/test-data/61047217-f98e-161f-af73-83a1e8d795c7
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74, deleteOnExit=true
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 22:10:46,725 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/test.cache.data in system properties and HBase conf
2023-07-23 22:10:46,726 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 22:10:46,726 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir in system properties and HBase conf
2023-07-23 22:10:46,726 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 22:10:46,726 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 22:10:46,726 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 22:10:46,726 DEBUG [Listener at localhost/42675] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 22:10:46,727 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 22:10:46,728 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/nfs.dump.dir in system properties and HBase conf
2023-07-23 22:10:46,728 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir in system properties and HBase conf
2023-07-23 22:10:46,728 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772):
Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 22:10:46,728 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 22:10:46,728 INFO [Listener at localhost/42675] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 22:10:46,732 WARN [Listener at localhost/42675] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 22:10:46,733 WARN [Listener at localhost/42675] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 22:10:46,766 DEBUG [Listener at localhost/42675-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101943c28b2000a, quorum=127.0.0.1:52385, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 22:10:46,766 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101943c28b2000a, quorum=127.0.0.1:52385, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 22:10:46,776 WARN [Listener at localhost/42675] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:46,778 INFO [Listener at localhost/42675] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:46,782 INFO 
[Listener at localhost/42675] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/Jetty_localhost_40109_hdfs____txjk4o/webapp 2023-07-23 22:10:46,874 INFO [Listener at localhost/42675] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40109 2023-07-23 22:10:46,880 WARN [Listener at localhost/42675] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 22:10:46,880 WARN [Listener at localhost/42675] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 22:10:46,923 WARN [Listener at localhost/45671] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:46,935 WARN [Listener at localhost/45671] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:46,937 WARN [Listener at localhost/45671] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:46,938 INFO [Listener at localhost/45671] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:46,942 INFO [Listener at localhost/45671] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/Jetty_localhost_34611_datanode____mc0kzg/webapp 2023-07-23 22:10:47,041 INFO [Listener at localhost/45671] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34611 2023-07-23 22:10:47,048 WARN [Listener at localhost/34149] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:47,071 WARN [Listener at localhost/34149] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:47,074 WARN [Listener at localhost/34149] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:47,075 INFO [Listener at localhost/34149] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:47,082 INFO [Listener at localhost/34149] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/Jetty_localhost_39189_datanode____.ssoh1p/webapp 2023-07-23 22:10:47,165 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe53bb14120db9f4: Processing first storage report for DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3 from datanode 88c93ba2-e254-4c02-995b-b2ca27337e92 2023-07-23 22:10:47,165 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe53bb14120db9f4: from storage DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3 node DatanodeRegistration(127.0.0.1:35991, datanodeUuid=88c93ba2-e254-4c02-995b-b2ca27337e92, infoPort=44119, infoSecurePort=0, ipcPort=34149, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,165 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xfe53bb14120db9f4: Processing first storage report for 
DS-77b84aeb-e6ab-47af-91fe-cc3a285711f2 from datanode 88c93ba2-e254-4c02-995b-b2ca27337e92 2023-07-23 22:10:47,165 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xfe53bb14120db9f4: from storage DS-77b84aeb-e6ab-47af-91fe-cc3a285711f2 node DatanodeRegistration(127.0.0.1:35991, datanodeUuid=88c93ba2-e254-4c02-995b-b2ca27337e92, infoPort=44119, infoSecurePort=0, ipcPort=34149, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,203 INFO [Listener at localhost/34149] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39189 2023-07-23 22:10:47,210 WARN [Listener at localhost/38855] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:47,231 WARN [Listener at localhost/38855] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:47,234 WARN [Listener at localhost/38855] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:47,236 INFO [Listener at localhost/38855] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:47,239 INFO [Listener at localhost/38855] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/Jetty_localhost_40067_datanode____1d8po/webapp 2023-07-23 22:10:47,325 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa51157d87d58d30e: Processing first storage report for DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e from datanode 2185afd8-b75f-4a0d-8426-a5f39050e1d2 2023-07-23 
22:10:47,325 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa51157d87d58d30e: from storage DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e node DatanodeRegistration(127.0.0.1:34037, datanodeUuid=2185afd8-b75f-4a0d-8426-a5f39050e1d2, infoPort=35821, infoSecurePort=0, ipcPort=38855, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,325 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa51157d87d58d30e: Processing first storage report for DS-8be142fb-6139-4e0b-9489-043a8eb51f5c from datanode 2185afd8-b75f-4a0d-8426-a5f39050e1d2 2023-07-23 22:10:47,325 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa51157d87d58d30e: from storage DS-8be142fb-6139-4e0b-9489-043a8eb51f5c node DatanodeRegistration(127.0.0.1:34037, datanodeUuid=2185afd8-b75f-4a0d-8426-a5f39050e1d2, infoPort=35821, infoSecurePort=0, ipcPort=38855, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,355 INFO [Listener at localhost/38855] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40067 2023-07-23 22:10:47,366 WARN [Listener at localhost/45331] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:47,475 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2851432098da724e: Processing first storage report for DS-649afdef-5689-49d4-b144-700c1d2d2477 from datanode 494eeee5-a62f-43be-8fac-ee6c4c18ea90 2023-07-23 22:10:47,476 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2851432098da724e: from storage DS-649afdef-5689-49d4-b144-700c1d2d2477 node DatanodeRegistration(127.0.0.1:42979, 
datanodeUuid=494eeee5-a62f-43be-8fac-ee6c4c18ea90, infoPort=34313, infoSecurePort=0, ipcPort=45331, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,476 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2851432098da724e: Processing first storage report for DS-23eaa280-a579-4a3e-a80e-aaae2d53e2d8 from datanode 494eeee5-a62f-43be-8fac-ee6c4c18ea90 2023-07-23 22:10:47,476 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2851432098da724e: from storage DS-23eaa280-a579-4a3e-a80e-aaae2d53e2d8 node DatanodeRegistration(127.0.0.1:42979, datanodeUuid=494eeee5-a62f-43be-8fac-ee6c4c18ea90, infoPort=34313, infoSecurePort=0, ipcPort=45331, storageInfo=lv=-57;cid=testClusterID;nsid=1571365150;c=1690150246735), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:47,479 DEBUG [Listener at localhost/45331] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7 2023-07-23 22:10:47,484 INFO [Listener at localhost/45331] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/zookeeper_0, clientPort=61961, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/zookeeper_0/version-2, dataDirSize=424 
dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 22:10:47,486 INFO [Listener at localhost/45331] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=61961 2023-07-23 22:10:47,486 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,488 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,516 INFO [Listener at localhost/45331] util.FSUtils(471): Created version file at hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f with version=8 2023-07-23 22:10:47,516 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/hbase-staging 2023-07-23 22:10:47,517 DEBUG [Listener at localhost/45331] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 22:10:47,517 DEBUG [Listener at localhost/45331] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 22:10:47,517 DEBUG [Listener at localhost/45331] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 22:10:47,517 DEBUG [Listener at localhost/45331] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,518 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:47,519 INFO [Listener at localhost/45331] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:47,520 INFO [Listener at localhost/45331] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35043 2023-07-23 22:10:47,521 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,522 INFO [Listener at localhost/45331] fs.HFileSystem(337): 
Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,524 INFO [Listener at localhost/45331] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35043 connecting to ZooKeeper ensemble=127.0.0.1:61961 2023-07-23 22:10:47,532 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:350430x0, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:47,533 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35043-0x101943c9f3f0000 connected 2023-07-23 22:10:47,551 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:47,552 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:47,552 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:47,555 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35043 2023-07-23 22:10:47,557 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35043 2023-07-23 22:10:47,558 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35043 2023-07-23 22:10:47,558 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=35043 2023-07-23 22:10:47,560 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35043 2023-07-23 22:10:47,562 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:47,563 INFO [Listener at localhost/45331] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 22:10:47,564 INFO [Listener at localhost/45331] http.HttpServer(1146): Jetty bound to port 39653 2023-07-23 22:10:47,564 INFO [Listener at localhost/45331] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:47,567 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,567 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ef61f93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:47,568 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,568 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3bc6c1b7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:47,691 INFO [Listener at localhost/45331] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:47,692 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:47,692 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:47,693 INFO [Listener at localhost/45331] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:47,694 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,695 INFO [Listener at localhost/45331] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@2dc307a3{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/jetty-0_0_0_0-39653-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6217809671621196930/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 22:10:47,697 INFO [Listener at localhost/45331] server.AbstractConnector(333): Started ServerConnector@2eb259{HTTP/1.1, (http/1.1)}{0.0.0.0:39653} 2023-07-23 22:10:47,697 INFO [Listener at localhost/45331] server.Server(415): Started @35426ms 2023-07-23 22:10:47,698 INFO [Listener at localhost/45331] master.HMaster(444): hbase.rootdir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f, hbase.cluster.distributed=false 2023-07-23 22:10:47,716 INFO [Listener at localhost/45331] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:47,717 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,717 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,717 INFO [Listener at localhost/45331] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:47,717 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,717 INFO [Listener at 
localhost/45331] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:47,717 INFO [Listener at localhost/45331] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:47,720 INFO [Listener at localhost/45331] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35669 2023-07-23 22:10:47,720 INFO [Listener at localhost/45331] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:47,721 DEBUG [Listener at localhost/45331] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:47,722 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,723 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,724 INFO [Listener at localhost/45331] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35669 connecting to ZooKeeper ensemble=127.0.0.1:61961 2023-07-23 22:10:47,731 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:356690x0, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:47,732 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35669-0x101943c9f3f0001 connected 2023-07-23 22:10:47,732 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does 
not yet exist, /hbase/master 2023-07-23 22:10:47,733 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:47,734 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:47,737 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35669 2023-07-23 22:10:47,737 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35669 2023-07-23 22:10:47,737 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35669 2023-07-23 22:10:47,738 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35669 2023-07-23 22:10:47,741 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35669 2023-07-23 22:10:47,743 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:47,743 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:47,743 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:47,744 INFO [Listener at localhost/45331] http.HttpServer(879): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:47,744 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:47,744 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:47,744 INFO [Listener at localhost/45331] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:47,745 INFO [Listener at localhost/45331] http.HttpServer(1146): Jetty bound to port 41523 2023-07-23 22:10:47,745 INFO [Listener at localhost/45331] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:47,747 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,747 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70173995{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:47,748 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,748 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29678fc3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:47,864 INFO [Listener at localhost/45331] 
webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:47,865 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:47,865 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:47,866 INFO [Listener at localhost/45331] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 22:10:47,871 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,872 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@74358d7c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/jetty-0_0_0_0-41523-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5731010543921767130/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:47,873 INFO [Listener at localhost/45331] server.AbstractConnector(333): Started ServerConnector@735db335{HTTP/1.1, (http/1.1)}{0.0.0.0:41523} 2023-07-23 22:10:47,874 INFO [Listener at localhost/45331] server.Server(415): Started @35602ms 2023-07-23 22:10:47,889 INFO [Listener at localhost/45331] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:47,889 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,889 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class 
java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,889 INFO [Listener at localhost/45331] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:47,889 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:47,890 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:47,890 INFO [Listener at localhost/45331] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:47,891 INFO [Listener at localhost/45331] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34225 2023-07-23 22:10:47,891 INFO [Listener at localhost/45331] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:47,895 DEBUG [Listener at localhost/45331] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:47,896 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,897 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:47,899 INFO [Listener at localhost/45331] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34225 connecting to ZooKeeper 
ensemble=127.0.0.1:61961 2023-07-23 22:10:47,903 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:342250x0, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:47,905 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34225-0x101943c9f3f0002 connected 2023-07-23 22:10:47,905 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:47,906 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:47,906 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:47,907 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34225 2023-07-23 22:10:47,907 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34225 2023-07-23 22:10:47,908 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34225 2023-07-23 22:10:47,913 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34225 2023-07-23 22:10:47,913 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34225 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] 
http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:47,915 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:47,916 INFO [Listener at localhost/45331] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 22:10:47,916 INFO [Listener at localhost/45331] http.HttpServer(1146): Jetty bound to port 37353 2023-07-23 22:10:47,916 INFO [Listener at localhost/45331] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:47,920 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,920 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d6c9f9c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:47,920 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:47,920 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4085c82a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:48,038 INFO [Listener at localhost/45331] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:48,040 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:48,040 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:48,041 INFO [Listener at localhost/45331] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:48,043 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:48,044 INFO [Listener at localhost/45331] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@7f94613e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/jetty-0_0_0_0-37353-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5439431414629261521/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:48,046 INFO [Listener at localhost/45331] server.AbstractConnector(333): Started ServerConnector@3cc0e5fa{HTTP/1.1, (http/1.1)}{0.0.0.0:37353} 2023-07-23 22:10:48,046 INFO [Listener at localhost/45331] server.Server(415): Started @35775ms 2023-07-23 22:10:48,064 INFO [Listener at localhost/45331] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:48,064 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:48,065 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:48,065 INFO [Listener at localhost/45331] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:48,065 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:48,065 INFO [Listener at localhost/45331] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 
2023-07-23 22:10:48,065 INFO [Listener at localhost/45331] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:48,066 INFO [Listener at localhost/45331] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36279 2023-07-23 22:10:48,067 INFO [Listener at localhost/45331] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:48,068 DEBUG [Listener at localhost/45331] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:48,069 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:48,070 INFO [Listener at localhost/45331] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:48,071 INFO [Listener at localhost/45331] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36279 connecting to ZooKeeper ensemble=127.0.0.1:61961 2023-07-23 22:10:48,092 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:362790x0, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:48,092 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:362790x0, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:48,103 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): regionserver:362790x0, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:48,104 DEBUG [Listener at localhost/45331] zookeeper.ZKUtil(164): 
regionserver:362790x0, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:48,108 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36279-0x101943c9f3f0003 connected 2023-07-23 22:10:48,108 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36279 2023-07-23 22:10:48,108 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36279 2023-07-23 22:10:48,109 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36279 2023-07-23 22:10:48,110 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36279 2023-07-23 22:10:48,110 DEBUG [Listener at localhost/45331] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36279 2023-07-23 22:10:48,113 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:48,113 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:48,113 INFO [Listener at localhost/45331] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:48,114 INFO [Listener at localhost/45331] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:48,114 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:48,114 INFO [Listener at localhost/45331] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:48,115 INFO [Listener at localhost/45331] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:48,116 INFO [Listener at localhost/45331] http.HttpServer(1146): Jetty bound to port 37697 2023-07-23 22:10:48,116 INFO [Listener at localhost/45331] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:48,117 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:48,117 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4208e963{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:48,118 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:48,118 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c6b909d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:48,249 INFO [Listener at localhost/45331] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:48,250 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(334): 
DefaultSessionIdManager workerName=node0 2023-07-23 22:10:48,250 INFO [Listener at localhost/45331] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:48,251 INFO [Listener at localhost/45331] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 22:10:48,253 INFO [Listener at localhost/45331] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:48,254 INFO [Listener at localhost/45331] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2953a5e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/java.io.tmpdir/jetty-0_0_0_0-37697-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5014402497353831529/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:48,258 INFO [Listener at localhost/45331] server.AbstractConnector(333): Started ServerConnector@2296d378{HTTP/1.1, (http/1.1)}{0.0.0.0:37697} 2023-07-23 22:10:48,258 INFO [Listener at localhost/45331] server.Server(415): Started @35987ms 2023-07-23 22:10:48,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:48,277 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4d9956c9{HTTP/1.1, (http/1.1)}{0.0.0.0:40805} 2023-07-23 22:10:48,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36006ms 2023-07-23 22:10:48,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,279 DEBUG [Listener at localhost/45331-EventThread] 
zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 22:10:48,280 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,281 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:48,281 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 22:10:48,281 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 22:10:48,282 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:48,282 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:48,282 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:48,283 DEBUG [Listener at localhost/45331-EventThread] 
zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 22:10:48,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,35043,1690150247517 from backup master directory 2023-07-23 22:10:48,285 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,285 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 22:10:48,285 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 22:10:48,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 22:10:48,288 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:48,324 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/hbase.id with ID: 33d22f97-187a-4d5f-bc7b-9f1a6b7ee9c0 2023-07-23 22:10:48,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:48,341 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1ec26e6a to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:48,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77680f7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:48,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:48,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 22:10:48,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:48,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store-tmp 2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 22:10:48,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:48,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:48,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:48,377 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/WALs/jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35043%2C1690150247517, suffix=, logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/WALs/jenkins-hbase4.apache.org,35043,1690150247517, archiveDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/oldWALs, maxLogs=10 2023-07-23 22:10:48,398 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK] 2023-07-23 22:10:48,398 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK] 2023-07-23 22:10:48,398 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK] 2023-07-23 22:10:48,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/WALs/jenkins-hbase4.apache.org,35043,1690150247517/jenkins-hbase4.apache.org%2C35043%2C1690150247517.1690150248380 2023-07-23 22:10:48,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK], DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK], DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK]] 2023-07-23 22:10:48,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:48,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:48,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,407 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,409 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 22:10:48,409 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 22:10:48,410 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:48,418 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:48,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10539734080, jitterRate=-0.01841077208518982}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:48,419 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:48,419 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 22:10:48,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 22:10:48,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 22:10:48,420 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-23 22:10:48,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 22:10:48,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 22:10:48,421 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 22:10:48,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 22:10:48,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-23 22:10:48,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 22:10:48,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 22:10:48,425 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 22:10:48,428 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 
2023-07-23 22:10:48,429 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 22:10:48,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 22:10:48,432 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:48,432 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,432 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:48,432 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:48,432 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:48,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,35043,1690150247517, sessionid=0x101943c9f3f0000, setting cluster-up flag 
(Was=false) 2023-07-23 22:10:48,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 22:10:48,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,454 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 22:10:48,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:48,469 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.hbase-snapshot/.tmp 2023-07-23 22:10:48,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 22:10:48,473 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 22:10:48,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 
2023-07-23 22:10:48,474 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 22:10:48,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 22:10:48,475 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-23 22:10:48,476 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 22:10:48,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 22:10:48,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-23 22:10:48,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 22:10:48,488 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service 
name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:48,488 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690150278498 2023-07-23 22:10:48,498 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 22:10:48,498 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 22:10:48,498 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 22:10:48,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 22:10:48,499 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 22:10:48,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 22:10:48,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 22:10:48,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 22:10:48,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150248500,5,FailOnTimeoutGroup] 2023-07-23 22:10:48,500 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', 
VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:48,501 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150248500,5,FailOnTimeoutGroup] 2023-07-23 22:10:48,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 22:10:48,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,512 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:48,512 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:48,512 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f 2023-07-23 22:10:48,521 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:48,523 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 22:10:48,524 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/info 2023-07-23 22:10:48,524 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 22:10:48,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,525 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 22:10:48,526 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/rep_barrier 2023-07-23 22:10:48,526 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 22:10:48,526 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,527 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 22:10:48,528 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/table 2023-07-23 22:10:48,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 22:10:48,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,529 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740 2023-07-23 22:10:48,529 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740 2023-07-23 22:10:48,531 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 22:10:48,532 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 22:10:48,534 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:48,534 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10066240640, jitterRate=-0.06250828504562378}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 22:10:48,535 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 22:10:48,535 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 22:10:48,535 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 22:10:48,536 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 22:10:48,536 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 22:10:48,536 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 22:10:48,539 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 22:10:48,540 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 22:10:48,561 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(951): ClusterId : 33d22f97-187a-4d5f-bc7b-9f1a6b7ee9c0 2023-07-23 22:10:48,561 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(951): ClusterId : 33d22f97-187a-4d5f-bc7b-9f1a6b7ee9c0 2023-07-23 22:10:48,562 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 22:10:48,564 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 22:10:48,561 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(951): ClusterId : 33d22f97-187a-4d5f-bc7b-9f1a6b7ee9c0 2023-07-23 22:10:48,564 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 22:10:48,566 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 22:10:48,566 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 22:10:48,566 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(43): 
Procedure online-snapshot initializing 2023-07-23 22:10:48,566 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 22:10:48,567 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 22:10:48,567 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 22:10:48,569 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 22:10:48,570 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 22:10:48,572 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 22:10:48,572 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ReadOnlyZKClient(139): Connect 0x4b618f64 to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:48,572 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ReadOnlyZKClient(139): Connect 0x1b1852f9 to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:48,574 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ReadOnlyZKClient(139): Connect 0x1de14574 to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:48,583 DEBUG [RS:1;jenkins-hbase4:34225] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45a66878, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:48,583 DEBUG [RS:0;jenkins-hbase4:35669] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1326b8b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:48,584 DEBUG [RS:2;jenkins-hbase4:36279] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11e03989, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:48,584 DEBUG [RS:0;jenkins-hbase4:35669] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@521bb283, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 22:10:48,584 DEBUG [RS:1;jenkins-hbase4:34225] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@49b6243, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 22:10:48,584 DEBUG [RS:2;jenkins-hbase4:36279] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54e2b5d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 22:10:48,594 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35669 2023-07-23 22:10:48,594 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34225 
2023-07-23 22:10:48,594 INFO [RS:0;jenkins-hbase4:35669] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 22:10:48,594 INFO [RS:1;jenkins-hbase4:34225] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 22:10:48,594 INFO [RS:1;jenkins-hbase4:34225] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 22:10:48,594 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36279 2023-07-23 22:10:48,594 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 22:10:48,594 INFO [RS:2;jenkins-hbase4:36279] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 22:10:48,594 INFO [RS:2;jenkins-hbase4:36279] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 22:10:48,594 INFO [RS:0;jenkins-hbase4:35669] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 22:10:48,594 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 22:10:48,594 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 22:10:48,594 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35043,1690150247517 with isa=jenkins-hbase4.apache.org/172.31.14.131:34225, startcode=1690150247888 2023-07-23 22:10:48,595 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35043,1690150247517 with isa=jenkins-hbase4.apache.org/172.31.14.131:36279, startcode=1690150248063 2023-07-23 22:10:48,595 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,35043,1690150247517 with isa=jenkins-hbase4.apache.org/172.31.14.131:35669, startcode=1690150247716 2023-07-23 22:10:48,595 DEBUG [RS:2;jenkins-hbase4:36279] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 22:10:48,595 DEBUG [RS:0;jenkins-hbase4:35669] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 22:10:48,595 DEBUG [RS:1;jenkins-hbase4:34225] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 22:10:48,597 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38981, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 22:10:48,597 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42013, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 22:10:48,597 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55609, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 22:10:48,598 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35043] master.ServerManager(394): 
Registering regionserver=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,598 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 22:10:48,599 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 22:10:48,599 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35043] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,599 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 22:10:48,599 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f 2023-07-23 22:10:48,599 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 22:10:48,599 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35043] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,599 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45671 2023-07-23 22:10:48,599 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 22:10:48,599 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f 2023-07-23 22:10:48,600 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 22:10:48,600 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39653 2023-07-23 22:10:48,600 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f 2023-07-23 22:10:48,600 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45671 2023-07-23 22:10:48,600 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45671 2023-07-23 22:10:48,600 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39653 2023-07-23 22:10:48,600 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39653 2023-07-23 22:10:48,601 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:48,610 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ZKUtil(162): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,610 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral 
node created, adding [jenkins-hbase4.apache.org,36279,1690150248063] 2023-07-23 22:10:48,610 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35669,1690150247716] 2023-07-23 22:10:48,610 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34225,1690150247888] 2023-07-23 22:10:48,610 WARN [RS:2;jenkins-hbase4:36279] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 22:10:48,610 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ZKUtil(162): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,610 INFO [RS:2;jenkins-hbase4:36279] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:48,610 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ZKUtil(162): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,610 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,610 WARN [RS:0;jenkins-hbase4:35669] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 22:10:48,610 WARN [RS:1;jenkins-hbase4:34225] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 22:10:48,610 INFO [RS:0;jenkins-hbase4:35669] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:48,611 INFO [RS:1;jenkins-hbase4:34225] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:48,611 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,611 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,617 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ZKUtil(162): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,617 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ZKUtil(162): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,618 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ZKUtil(162): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,618 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ZKUtil(162): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,619 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ZKUtil(162): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,623 DEBUG [RS:1;jenkins-hbase4:34225] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 22:10:48,623 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ZKUtil(162): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,623 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ZKUtil(162): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,623 INFO [RS:1;jenkins-hbase4:34225] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 22:10:48,623 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ZKUtil(162): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,623 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ZKUtil(162): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,624 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 22:10:48,624 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 22:10:48,625 INFO [RS:2;jenkins-hbase4:36279] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 22:10:48,625 INFO [RS:0;jenkins-hbase4:35669] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 22:10:48,626 INFO [RS:1;jenkins-hbase4:34225] 
regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 22:10:48,629 INFO [RS:1;jenkins-hbase4:34225] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 22:10:48,630 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,630 INFO [RS:0;jenkins-hbase4:35669] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 22:10:48,630 INFO [RS:2;jenkins-hbase4:36279] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 22:10:48,630 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 22:10:48,635 INFO [RS:0;jenkins-hbase4:35669] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 22:10:48,635 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,635 INFO [RS:2;jenkins-hbase4:36279] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 22:10:48,635 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,635 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 22:10:48,636 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 22:10:48,638 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,638 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,638 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,638 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,638 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,638 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:48,639 DEBUG 
[RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:2;jenkins-hbase4:36279] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,639 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-23 22:10:48,639 DEBUG [RS:1;jenkins-hbase4:34225] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,640 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,640 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,640 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:48,640 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,640 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,641 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,641 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,641 DEBUG [RS:0;jenkins-hbase4:35669] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:48,641 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,641 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,641 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,643 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,643 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,643 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,643 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,644 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,644 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,644 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,644 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,656 INFO [RS:1;jenkins-hbase4:34225] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 22:10:48,656 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34225,1690150247888-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,657 INFO [RS:2;jenkins-hbase4:36279] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 22:10:48,657 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36279,1690150248063-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,666 INFO [RS:0;jenkins-hbase4:35669] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 22:10:48,666 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35669,1690150247716-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,670 INFO [RS:1;jenkins-hbase4:34225] regionserver.Replication(203): jenkins-hbase4.apache.org,34225,1690150247888 started 2023-07-23 22:10:48,670 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34225,1690150247888, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34225, sessionid=0x101943c9f3f0002 2023-07-23 22:10:48,671 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 22:10:48,671 DEBUG [RS:1;jenkins-hbase4:34225] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,671 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34225,1690150247888' 2023-07-23 22:10:48,671 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 22:10:48,671 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34225,1690150247888 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34225,1690150247888' 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 22:10:48,672 DEBUG [RS:1;jenkins-hbase4:34225] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 22:10:48,673 INFO [RS:1;jenkins-hbase4:34225] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 22:10:48,674 INFO [RS:2;jenkins-hbase4:36279] regionserver.Replication(203): jenkins-hbase4.apache.org,36279,1690150248063 started 2023-07-23 22:10:48,674 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36279,1690150248063, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36279, sessionid=0x101943c9f3f0003 2023-07-23 22:10:48,675 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 22:10:48,675 DEBUG [RS:2;jenkins-hbase4:36279] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,675 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36279,1690150248063' 2023-07-23 22:10:48,675 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 22:10:48,675 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,675 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 22:10:48,676 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ZKUtil(398): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 22:10:48,676 INFO [RS:1;jenkins-hbase4:34225] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36279,1690150248063' 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 22:10:48,676 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,676 DEBUG [RS:2;jenkins-hbase4:36279] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 22:10:48,677 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,677 DEBUG [RS:2;jenkins-hbase4:36279] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 22:10:48,677 INFO [RS:2;jenkins-hbase4:36279] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 22:10:48,677 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,678 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ZKUtil(398): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 22:10:48,678 INFO [RS:2;jenkins-hbase4:36279] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 22:10:48,678 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,678 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,681 INFO [RS:0;jenkins-hbase4:35669] regionserver.Replication(203): jenkins-hbase4.apache.org,35669,1690150247716 started 2023-07-23 22:10:48,681 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35669,1690150247716, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35669, sessionid=0x101943c9f3f0001 2023-07-23 22:10:48,681 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 22:10:48,681 DEBUG [RS:0;jenkins-hbase4:35669] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,681 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35669,1690150247716' 2023-07-23 22:10:48,681 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 22:10:48,681 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 22:10:48,682 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 22:10:48,682 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 22:10:48,682 DEBUG [RS:0;jenkins-hbase4:35669] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:48,682 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35669,1690150247716' 2023-07-23 22:10:48,682 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/online-snapshot/abort' 2023-07-23 22:10:48,683 DEBUG [RS:0;jenkins-hbase4:35669] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 22:10:48,683 DEBUG [RS:0;jenkins-hbase4:35669] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 22:10:48,683 INFO [RS:0;jenkins-hbase4:35669] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 22:10:48,683 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,684 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ZKUtil(398): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 22:10:48,684 INFO [RS:0;jenkins-hbase4:35669] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 22:10:48,684 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,684 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:48,690 DEBUG [jenkins-hbase4:35043] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:48,691 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36279,1690150248063, state=OPENING 2023-07-23 22:10:48,693 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 22:10:48,694 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:48,695 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36279,1690150248063}] 2023-07-23 22:10:48,695 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 22:10:48,781 INFO [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36279%2C1690150248063, suffix=, 
logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,36279,1690150248063, archiveDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs, maxLogs=32 2023-07-23 22:10:48,781 INFO [RS:1;jenkins-hbase4:34225] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34225%2C1690150247888, suffix=, logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,34225,1690150247888, archiveDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs, maxLogs=32 2023-07-23 22:10:48,786 WARN [ReadOnlyZKClient-127.0.0.1:61961@0x1ec26e6a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 22:10:48,786 INFO [RS:0;jenkins-hbase4:35669] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35669%2C1690150247716, suffix=, logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,35669,1690150247716, archiveDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs, maxLogs=32 2023-07-23 22:10:48,787 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:48,788 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35982, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:48,789 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36279] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35982 deadline: 1690150308788, 
exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,815 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK] 2023-07-23 22:10:48,815 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK] 2023-07-23 22:10:48,816 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK] 2023-07-23 22:10:48,825 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK] 2023-07-23 22:10:48,825 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK] 2023-07-23 22:10:48,825 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK] 2023-07-23 22:10:48,832 DEBUG [RS-EventLoopGroup-11-3] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK] 2023-07-23 22:10:48,833 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK] 2023-07-23 22:10:48,833 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK] 2023-07-23 22:10:48,835 INFO [RS:1;jenkins-hbase4:34225] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,34225,1690150247888/jenkins-hbase4.apache.org%2C34225%2C1690150247888.1690150248782 2023-07-23 22:10:48,838 DEBUG [RS:1;jenkins-hbase4:34225] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK], DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK], DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK]] 2023-07-23 22:10:48,839 INFO [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,36279,1690150248063/jenkins-hbase4.apache.org%2C36279%2C1690150248063.1690150248782 2023-07-23 22:10:48,842 INFO [RS:0;jenkins-hbase4:35669] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,35669,1690150247716/jenkins-hbase4.apache.org%2C35669%2C1690150247716.1690150248787 2023-07-23 22:10:48,842 DEBUG [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK], DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK], DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK]] 2023-07-23 22:10:48,843 DEBUG [RS:0;jenkins-hbase4:35669] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK], DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK], DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK]] 2023-07-23 22:10:48,850 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:48,852 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:48,854 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35988, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:48,861 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 22:10:48,861 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:48,863 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C36279%2C1690150248063.meta, suffix=.meta, logDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,36279,1690150248063, archiveDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs, maxLogs=32 2023-07-23 22:10:48,883 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK] 2023-07-23 22:10:48,883 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK] 2023-07-23 22:10:48,883 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK] 2023-07-23 22:10:48,886 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/WALs/jenkins-hbase4.apache.org,36279,1690150248063/jenkins-hbase4.apache.org%2C36279%2C1690150248063.meta.1690150248864.meta 2023-07-23 22:10:48,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35991,DS-32c84afb-6fa2-4001-8e7e-5dc62d315bc3,DISK], DatanodeInfoWithStorage[127.0.0.1:34037,DS-3a45eee8-df8f-41f5-9c20-db8e8cf3353e,DISK], DatanodeInfoWithStorage[127.0.0.1:42979,DS-649afdef-5689-49d4-b144-700c1d2d2477,DISK]] 2023-07-23 22:10:48,891 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 22:10:48,891 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 22:10:48,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 22:10:48,893 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 22:10:48,894 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/info 2023-07-23 22:10:48,894 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/info 2023-07-23 22:10:48,895 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 22:10:48,896 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,896 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 22:10:48,897 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/rep_barrier 2023-07-23 22:10:48,897 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/rep_barrier 2023-07-23 22:10:48,897 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 22:10:48,898 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,898 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 22:10:48,899 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/table 2023-07-23 22:10:48,899 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/table 2023-07-23 22:10:48,899 INFO [StoreOpener-1588230740-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 22:10:48,900 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:48,901 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740 2023-07-23 22:10:48,902 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740 2023-07-23 22:10:48,904 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 22:10:48,906 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 22:10:48,906 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11446979200, jitterRate=0.06608301401138306}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 22:10:48,907 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 22:10:48,907 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690150248850 2023-07-23 22:10:48,912 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 22:10:48,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 22:10:48,913 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36279,1690150248063, state=OPEN 2023-07-23 22:10:48,919 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 22:10:48,919 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 22:10:48,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 22:10:48,921 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): 
Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36279,1690150248063 in 224 msec 2023-07-23 22:10:48,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 22:10:48,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 385 msec 2023-07-23 22:10:48,928 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 451 msec 2023-07-23 22:10:48,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690150248928, completionTime=-1 2023-07-23 22:10:48,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 22:10:48,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 22:10:48,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 22:10:48,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690150308937 2023-07-23 22:10:48,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690150368937 2023-07-23 22:10:48,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 8 msec 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35043,1690150247517-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35043,1690150247517-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35043,1690150247517-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:35043, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-23 22:10:48,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:48,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 22:10:48,948 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 22:10:48,948 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:48,949 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:48,951 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:48,951 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c empty. 
2023-07-23 22:10:48,952 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:48,952 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 22:10:49,002 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:49,004 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 19818ecd19aa7b4949319d04b52e7d9c, NAME => 'hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp 2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 19818ecd19aa7b4949319d04b52e7d9c, disabling compactions & flushes 2023-07-23 22:10:49,024 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 
2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. after waiting 0 ms 2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 2023-07-23 22:10:49,024 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 2023-07-23 22:10:49,024 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 19818ecd19aa7b4949319d04b52e7d9c: 2023-07-23 22:10:49,027 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:49,028 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150249028"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150249028"}]},"ts":"1690150249028"} 2023-07-23 22:10:49,031 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:49,032 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:49,032 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249032"}]},"ts":"1690150249032"} 2023-07-23 22:10:49,034 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 22:10:49,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:49,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:49,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:49,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:49,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:49,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=19818ecd19aa7b4949319d04b52e7d9c, ASSIGN}] 2023-07-23 22:10:49,040 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=19818ecd19aa7b4949319d04b52e7d9c, ASSIGN 2023-07-23 22:10:49,041 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=19818ecd19aa7b4949319d04b52e7d9c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36279,1690150248063; forceNewPlan=false, retain=false 2023-07-23 22:10:49,092 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:49,094 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 22:10:49,096 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:49,097 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:49,099 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,099 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory 
hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01 empty. 2023-07-23 22:10:49,100 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,100 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 22:10:49,112 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:49,113 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1ff03168c527c73a7fdb4c4e5ed72b01, NAME => 'hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp 2023-07-23 22:10:49,123 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,124 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): 
Closing 1ff03168c527c73a7fdb4c4e5ed72b01, disabling compactions & flushes 2023-07-23 22:10:49,124 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,124 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,124 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. after waiting 0 ms 2023-07-23 22:10:49,124 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,124 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,124 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 1ff03168c527c73a7fdb4c4e5ed72b01: 2023-07-23 22:10:49,126 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:49,128 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150249127"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150249127"}]},"ts":"1690150249127"} 2023-07-23 22:10:49,129 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:49,130 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:49,130 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249130"}]},"ts":"1690150249130"} 2023-07-23 22:10:49,131 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 22:10:49,134 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:49,134 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:49,134 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:49,134 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:49,134 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:49,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=1ff03168c527c73a7fdb4c4e5ed72b01, ASSIGN}] 2023-07-23 22:10:49,135 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=1ff03168c527c73a7fdb4c4e5ed72b01, ASSIGN 2023-07-23 22:10:49,135 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, 
region=1ff03168c527c73a7fdb4c4e5ed72b01, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35669,1690150247716; forceNewPlan=false, retain=false 2023-07-23 22:10:49,136 INFO [jenkins-hbase4:35043] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-23 22:10:49,138 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=19818ecd19aa7b4949319d04b52e7d9c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:49,138 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150249137"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150249137"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150249137"}]},"ts":"1690150249137"} 2023-07-23 22:10:49,138 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1ff03168c527c73a7fdb4c4e5ed72b01, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:49,138 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150249138"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150249138"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150249138"}]},"ts":"1690150249138"} 2023-07-23 22:10:49,139 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 19818ecd19aa7b4949319d04b52e7d9c, server=jenkins-hbase4.apache.org,36279,1690150248063}] 2023-07-23 22:10:49,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 
1ff03168c527c73a7fdb4c4e5ed72b01, server=jenkins-hbase4.apache.org,35669,1690150247716}] 2023-07-23 22:10:49,295 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:49,295 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 22:10:49,297 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 22:10:49,297 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 2023-07-23 22:10:49,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 19818ecd19aa7b4949319d04b52e7d9c, NAME => 'hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:49,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,301 INFO 
[StoreOpener-19818ecd19aa7b4949319d04b52e7d9c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,301 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1ff03168c527c73a7fdb4c4e5ed72b01, NAME => 'hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. service=MultiRowMutationService 2023-07-23 22:10:49,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,302 DEBUG [StoreOpener-19818ecd19aa7b4949319d04b52e7d9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/info 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,302 DEBUG [StoreOpener-19818ecd19aa7b4949319d04b52e7d9c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/info 2023-07-23 22:10:49,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,303 INFO [StoreOpener-19818ecd19aa7b4949319d04b52e7d9c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 19818ecd19aa7b4949319d04b52e7d9c columnFamilyName info 2023-07-23 22:10:49,304 INFO [StoreOpener-19818ecd19aa7b4949319d04b52e7d9c-1] regionserver.HStore(310): Store=19818ecd19aa7b4949319d04b52e7d9c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:49,304 INFO [StoreOpener-1ff03168c527c73a7fdb4c4e5ed72b01-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,306 DEBUG [StoreOpener-1ff03168c527c73a7fdb4c4e5ed72b01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/m 2023-07-23 22:10:49,306 DEBUG [StoreOpener-1ff03168c527c73a7fdb4c4e5ed72b01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/m 2023-07-23 22:10:49,306 INFO [StoreOpener-1ff03168c527c73a7fdb4c4e5ed72b01-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1ff03168c527c73a7fdb4c4e5ed72b01 columnFamilyName m 2023-07-23 22:10:49,307 INFO [StoreOpener-1ff03168c527c73a7fdb4c4e5ed72b01-1] regionserver.HStore(310): Store=1ff03168c527c73a7fdb4c4e5ed72b01/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:49,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 19818ecd19aa7b4949319d04b52e7d9c 2023-07-23 22:10:49,312 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1ff03168c527c73a7fdb4c4e5ed72b01 2023-07-23 22:10:49,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:49,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 19818ecd19aa7b4949319d04b52e7d9c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11192117120, jitterRate=0.04234713315963745}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:49,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 19818ecd19aa7b4949319d04b52e7d9c: 2023-07-23 22:10:49,322 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c., pid=8, masterSystemTime=1690150249292 2023-07-23 22:10:49,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:49,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 
2023-07-23 22:10:49,329 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. 2023-07-23 22:10:49,329 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=19818ecd19aa7b4949319d04b52e7d9c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:49,329 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1ff03168c527c73a7fdb4c4e5ed72b01; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5f4234b3, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:49,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1ff03168c527c73a7fdb4c4e5ed72b01: 2023-07-23 22:10:49,330 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150249329"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150249329"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150249329"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150249329"}]},"ts":"1690150249329"} 2023-07-23 22:10:49,331 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01., pid=9, masterSystemTime=1690150249295 2023-07-23 22:10:49,335 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=1ff03168c527c73a7fdb4c4e5ed72b01, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35669,1690150247716 2023-07-23 22:10:49,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure 
pid=8, resume processing ppid=5 2023-07-23 22:10:49,335 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150249335"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150249335"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150249335"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150249335"}]},"ts":"1690150249335"} 2023-07-23 22:10:49,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 19818ecd19aa7b4949319d04b52e7d9c, server=jenkins-hbase4.apache.org,36279,1690150248063 in 194 msec 2023-07-23 22:10:49,337 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 2023-07-23 22:10:49,338 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. 
2023-07-23 22:10:49,339 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-23 22:10:49,339 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=19818ecd19aa7b4949319d04b52e7d9c, ASSIGN in 297 msec 2023-07-23 22:10:49,339 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:49,339 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249339"}]},"ts":"1690150249339"} 2023-07-23 22:10:49,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 22:10:49,340 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 1ff03168c527c73a7fdb4c4e5ed72b01, server=jenkins-hbase4.apache.org,35669,1690150247716 in 198 msec 2023-07-23 22:10:49,341 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 22:10:49,344 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-23 22:10:49,344 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=1ff03168c527c73a7fdb4c4e5ed72b01, ASSIGN in 206 msec 2023-07-23 22:10:49,344 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:49,344 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:49,345 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249344"}]},"ts":"1690150249344"} 2023-07-23 22:10:49,346 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 399 msec 2023-07-23 22:10:49,346 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 22:10:49,346 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 22:10:49,348 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:49,348 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:49,348 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:49,350 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 256 msec 2023-07-23 22:10:49,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; 
CreateNamespaceProcedure, namespace=default 2023-07-23 22:10:49,373 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:49,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-23 22:10:49,385 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 22:10:49,392 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:49,396 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-23 22:10:49,397 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:49,399 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:49,403 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 22:10:49,403 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] 
rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-23 22:10:49,410 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 22:10:49,413 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:49,413 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:49,413 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 22:10:49,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.128sec 2023-07-23 22:10:49,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-23 22:10:49,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:49,415 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 22:10:49,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-23 22:10:49,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-23 22:10:49,417 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,35043,1690150247517] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 22:10:49,417 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:49,418 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:49,419 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-23 22:10:49,419 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/quota/90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,420 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/quota/90b01685cf87b7512396438628b63f31 empty. 2023-07-23 22:10:49,420 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/quota/90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,420 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-23 22:10:49,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-23 22:10:49,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-23 22:10:49,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:49,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:49,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-23 22:10:49,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 22:10:49,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35043,1690150247517-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 22:10:49,433 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35043,1690150247517-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 22:10:49,437 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 22:10:49,445 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:49,446 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 90b01685cf87b7512396438628b63f31, NAME => 'hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp 2023-07-23 22:10:49,461 DEBUG [Listener at localhost/45331] 
zookeeper.ReadOnlyZKClient(139): Connect 0x28bc48f0 to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:49,472 DEBUG [Listener at localhost/45331] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2959891b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:49,482 DEBUG [hconnection-0x5c04004-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:49,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 90b01685cf87b7512396438628b63f31, disabling compactions & flushes 2023-07-23 22:10:49,483 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 2023-07-23 22:10:49,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 2023-07-23 22:10:49,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. after waiting 0 ms 2023-07-23 22:10:49,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 
2023-07-23 22:10:49,483 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 2023-07-23 22:10:49,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 90b01685cf87b7512396438628b63f31: 2023-07-23 22:10:49,485 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35994, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:49,486 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:49,486 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:49,487 INFO [Listener at localhost/45331] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:49,490 DEBUG [Listener at localhost/45331] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 22:10:49,490 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690150249488"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150249488"}]},"ts":"1690150249488"} 2023-07-23 22:10:49,492 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:49,492 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40618, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 22:10:49,493 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:49,493 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249493"}]},"ts":"1690150249493"} 2023-07-23 22:10:49,495 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-23 22:10:49,496 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 22:10:49,496 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:49,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 22:10:49,498 DEBUG [Listener at localhost/45331] zookeeper.ReadOnlyZKClient(139): Connect 0x0169a71c to 127.0.0.1:61961 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:49,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:49,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:49,500 DEBUG [PEWorker-5] 
balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:49,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:49,500 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:49,501 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=90b01685cf87b7512396438628b63f31, ASSIGN}] 2023-07-23 22:10:49,501 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=90b01685cf87b7512396438628b63f31, ASSIGN 2023-07-23 22:10:49,502 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=90b01685cf87b7512396438628b63f31, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36279,1690150248063; forceNewPlan=false, retain=false 2023-07-23 22:10:49,516 DEBUG [Listener at localhost/45331] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ae0755, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:49,516 INFO [Listener at localhost/45331] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:61961 2023-07-23 22:10:49,521 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, 
path=null 2023-07-23 22:10:49,522 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101943c9f3f000a connected 2023-07-23 22:10:49,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-23 22:10:49,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-23 22:10:49,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 22:10:49,543 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:49,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 75 msec 2023-07-23 22:10:49,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 22:10:49,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:49,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; 
CreateTableProcedure table=np1:table1 2023-07-23 22:10:49,644 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:49,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-23 22:10:49,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 22:10:49,646 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:49,647 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 22:10:49,649 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:49,650 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,650 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 empty. 
2023-07-23 22:10:49,651 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,651 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 22:10:49,652 INFO [jenkins-hbase4:35043] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:49,653 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=90b01685cf87b7512396438628b63f31, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:49,653 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690150249653"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150249653"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150249653"}]},"ts":"1690150249653"} 2023-07-23 22:10:49,655 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 90b01685cf87b7512396438628b63f31, server=jenkins-hbase4.apache.org,36279,1690150248063}] 2023-07-23 22:10:49,663 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:49,665 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7f9a7cb246f1609e21a95d7e08ea7da9, NAME => 'np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp 2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 7f9a7cb246f1609e21a95d7e08ea7da9, disabling compactions & flushes 2023-07-23 22:10:49,674 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. after waiting 0 ms 2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:49,674 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 
2023-07-23 22:10:49,674 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 7f9a7cb246f1609e21a95d7e08ea7da9: 2023-07-23 22:10:49,676 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:49,677 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150249677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150249677"}]},"ts":"1690150249677"} 2023-07-23 22:10:49,678 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 22:10:49,678 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:49,679 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249678"}]},"ts":"1690150249678"} 2023-07-23 22:10:49,680 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-23 22:10:49,684 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:49,684 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:49,684 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:49,684 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:49,684 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number 
of racks=1 2023-07-23 22:10:49,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, ASSIGN}] 2023-07-23 22:10:49,685 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, ASSIGN 2023-07-23 22:10:49,685 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36279,1690150248063; forceNewPlan=false, retain=false 2023-07-23 22:10:49,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 22:10:49,810 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 
2023-07-23 22:10:49,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 90b01685cf87b7512396438628b63f31, NAME => 'hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:49,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,812 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,813 DEBUG [StoreOpener-90b01685cf87b7512396438628b63f31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/q 2023-07-23 22:10:49,813 DEBUG [StoreOpener-90b01685cf87b7512396438628b63f31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/q 2023-07-23 22:10:49,814 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 90b01685cf87b7512396438628b63f31 columnFamilyName q 2023-07-23 22:10:49,814 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] regionserver.HStore(310): Store=90b01685cf87b7512396438628b63f31/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:49,814 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,816 DEBUG [StoreOpener-90b01685cf87b7512396438628b63f31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/u 2023-07-23 22:10:49,816 DEBUG [StoreOpener-90b01685cf87b7512396438628b63f31-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/u 2023-07-23 22:10:49,816 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 90b01685cf87b7512396438628b63f31 columnFamilyName u 2023-07-23 22:10:49,816 INFO [StoreOpener-90b01685cf87b7512396438628b63f31-1] regionserver.HStore(310): Store=90b01685cf87b7512396438628b63f31/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:49,817 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota 
descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-23 22:10:49,820 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 90b01685cf87b7512396438628b63f31 2023-07-23 22:10:49,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:49,823 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 90b01685cf87b7512396438628b63f31; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10470452320, jitterRate=-0.024863138794898987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 22:10:49,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 90b01685cf87b7512396438628b63f31: 2023-07-23 22:10:49,824 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31., pid=16, masterSystemTime=1690150249807 2023-07-23 22:10:49,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 2023-07-23 22:10:49,825 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. 
2023-07-23 22:10:49,826 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=90b01685cf87b7512396438628b63f31, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:49,826 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690150249825"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150249825"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150249825"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150249825"}]},"ts":"1690150249825"} 2023-07-23 22:10:49,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-23 22:10:49,828 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 90b01685cf87b7512396438628b63f31, server=jenkins-hbase4.apache.org,36279,1690150248063 in 172 msec 2023-07-23 22:10:49,829 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 22:10:49,830 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=90b01685cf87b7512396438628b63f31, ASSIGN in 327 msec 2023-07-23 22:10:49,830 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:49,830 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150249830"}]},"ts":"1690150249830"} 2023-07-23 22:10:49,831 INFO [PEWorker-5] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-23 22:10:49,833 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:49,835 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 419 msec 2023-07-23 22:10:49,836 INFO [jenkins-hbase4:35043] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:49,837 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7f9a7cb246f1609e21a95d7e08ea7da9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:49,837 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150249837"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150249837"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150249837"}]},"ts":"1690150249837"} 2023-07-23 22:10:49,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 7f9a7cb246f1609e21a95d7e08ea7da9, server=jenkins-hbase4.apache.org,36279,1690150248063}] 2023-07-23 22:10:49,932 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-23 22:10:49,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 22:10:49,994 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:49,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f9a7cb246f1609e21a95d7e08ea7da9, NAME => 'np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:49,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:49,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,996 INFO [StoreOpener-7f9a7cb246f1609e21a95d7e08ea7da9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,997 DEBUG [StoreOpener-7f9a7cb246f1609e21a95d7e08ea7da9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/fam1 2023-07-23 22:10:49,997 DEBUG [StoreOpener-7f9a7cb246f1609e21a95d7e08ea7da9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/fam1 2023-07-23 22:10:49,997 INFO [StoreOpener-7f9a7cb246f1609e21a95d7e08ea7da9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f9a7cb246f1609e21a95d7e08ea7da9 columnFamilyName fam1 2023-07-23 22:10:49,998 INFO [StoreOpener-7f9a7cb246f1609e21a95d7e08ea7da9-1] regionserver.HStore(310): Store=7f9a7cb246f1609e21a95d7e08ea7da9/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:49,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:49,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,003 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:50,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f9a7cb246f1609e21a95d7e08ea7da9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12074171200, jitterRate=0.12449482083320618}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:50,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f9a7cb246f1609e21a95d7e08ea7da9: 2023-07-23 22:10:50,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9., pid=18, masterSystemTime=1690150249989 2023-07-23 22:10:50,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:50,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 
2023-07-23 22:10:50,006 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=7f9a7cb246f1609e21a95d7e08ea7da9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:50,006 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150250006"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150250006"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150250006"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150250006"}]},"ts":"1690150250006"} 2023-07-23 22:10:50,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 22:10:50,009 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 7f9a7cb246f1609e21a95d7e08ea7da9, server=jenkins-hbase4.apache.org,36279,1690150248063 in 170 msec 2023-07-23 22:10:50,010 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-23 22:10:50,010 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, ASSIGN in 325 msec 2023-07-23 22:10:50,011 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:50,011 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150250011"}]},"ts":"1690150250011"} 2023-07-23 22:10:50,012 INFO [PEWorker-5] 
hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-23 22:10:50,014 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:50,015 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 373 msec 2023-07-23 22:10:50,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 22:10:50,248 INFO [Listener at localhost/45331] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-23 22:10:50,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:50,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-23 22:10:50,252 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:50,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-23 22:10:50,253 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 22:10:50,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=20 msec 2023-07-23 22:10:50,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 22:10:50,356 INFO [Listener at localhost/45331] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-23 22:10:50,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:50,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:50,358 INFO [Listener at localhost/45331] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-23 22:10:50,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-23 22:10:50,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-23 22:10:50,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 22:10:50,362 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150250362"}]},"ts":"1690150250362"} 2023-07-23 22:10:50,363 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-23 22:10:50,364 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-23 22:10:50,365 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, UNASSIGN}] 2023-07-23 22:10:50,365 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, UNASSIGN 2023-07-23 22:10:50,366 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=7f9a7cb246f1609e21a95d7e08ea7da9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36279,1690150248063 2023-07-23 22:10:50,366 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150250366"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150250366"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150250366"}]},"ts":"1690150250366"} 2023-07-23 22:10:50,367 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 7f9a7cb246f1609e21a95d7e08ea7da9, server=jenkins-hbase4.apache.org,36279,1690150248063}] 2023-07-23 22:10:50,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 22:10:50,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f9a7cb246f1609e21a95d7e08ea7da9, disabling compactions & flushes 2023-07-23 22:10:50,520 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:50,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 
2023-07-23 22:10:50,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. after waiting 0 ms 2023-07-23 22:10:50,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:50,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:50,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9. 2023-07-23 22:10:50,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f9a7cb246f1609e21a95d7e08ea7da9: 2023-07-23 22:10:50,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,527 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=7f9a7cb246f1609e21a95d7e08ea7da9, regionState=CLOSED 2023-07-23 22:10:50,527 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150250527"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150250527"}]},"ts":"1690150250527"} 2023-07-23 22:10:50,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-23 22:10:50,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 
7f9a7cb246f1609e21a95d7e08ea7da9, server=jenkins-hbase4.apache.org,36279,1690150248063 in 161 msec 2023-07-23 22:10:50,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-23 22:10:50,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=7f9a7cb246f1609e21a95d7e08ea7da9, UNASSIGN in 164 msec 2023-07-23 22:10:50,531 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150250531"}]},"ts":"1690150250531"} 2023-07-23 22:10:50,532 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-23 22:10:50,534 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-23 22:10:50,537 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 175 msec 2023-07-23 22:10:50,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 22:10:50,664 INFO [Listener at localhost/45331] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-23 22:10:50,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-23 22:10:50,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-23 22:10:50,668 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure 
table=np1:table1 2023-07-23 22:10:50,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-23 22:10:50,668 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 22:10:50,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:50,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 22:10:50,673 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,674 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/fam1, FileablePath, hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/recovered.edits] 2023-07-23 22:10:50,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 22:10:50,680 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/recovered.edits/4.seqid to hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/archive/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9/recovered.edits/4.seqid 2023-07-23 22:10:50,680 
DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/.tmp/data/np1/table1/7f9a7cb246f1609e21a95d7e08ea7da9 2023-07-23 22:10:50,680 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 22:10:50,683 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 22:10:50,684 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-23 22:10:50,686 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-23 22:10:50,687 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 22:10:50,687 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-23 22:10:50,687 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150250687"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:50,688 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 22:10:50,689 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7f9a7cb246f1609e21a95d7e08ea7da9, NAME => 'np1:table1,,1690150249641.7f9a7cb246f1609e21a95d7e08ea7da9.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 22:10:50,689 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
2023-07-23 22:10:50,689 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150250689"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:50,690 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-23 22:10:50,695 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 22:10:50,696 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 30 msec 2023-07-23 22:10:50,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 22:10:50,776 INFO [Listener at localhost/45331] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-23 22:10:50,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-23 22:10:50,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,789 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,791 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,793 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,794 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 22:10:50,794 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-23 22:10:50,794 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:50,795 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,796 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 22:10:50,797 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-23 22:10:50,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35043] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 22:10:50,895 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 22:10:50,895 INFO [Listener at localhost/45331] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 22:10:50,895 DEBUG [Listener at localhost/45331] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28bc48f0 to 127.0.0.1:61961 2023-07-23 22:10:50,895 DEBUG [Listener at localhost/45331] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:50,895 DEBUG [Listener at localhost/45331] util.JVMClusterUtil(237): Shutting 
down HBase Cluster 2023-07-23 22:10:50,896 DEBUG [Listener at localhost/45331] util.JVMClusterUtil(257): Found active master hash=2131942024, stopped=false 2023-07-23 22:10:50,896 DEBUG [Listener at localhost/45331] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 22:10:50,896 DEBUG [Listener at localhost/45331] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 22:10:50,896 DEBUG [Listener at localhost/45331] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 22:10:50,896 INFO [Listener at localhost/45331] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:50,897 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:50,897 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:50,897 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:50,897 INFO [Listener at localhost/45331] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 22:10:50,897 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:50,897 
DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:50,898 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:50,899 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:50,899 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:50,899 DEBUG [Listener at localhost/45331] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ec26e6a to 127.0.0.1:61961
2023-07-23 22:10:50,899 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:50,899 DEBUG [Listener at localhost/45331] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35669,1690150247716' *****
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34225,1690150247888' *****
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:50,900 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36279,1690150248063' *****
2023-07-23 22:10:50,900 INFO [Listener at localhost/45331] regionserver.HRegionServer(2311): STOPPED: Shutdown requested
2023-07-23 22:10:50,900 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:50,901 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:50,905 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:50,903 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:50,901 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:50,901 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:50,914 INFO [RS:0;jenkins-hbase4:35669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@74358d7c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:50,914 INFO [RS:2;jenkins-hbase4:36279] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2953a5e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:50,914 INFO [RS:1;jenkins-hbase4:34225] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7f94613e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 22:10:50,914 INFO [RS:0;jenkins-hbase4:35669] server.AbstractConnector(383): Stopped ServerConnector@735db335{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:50,914 INFO [RS:2;jenkins-hbase4:36279] server.AbstractConnector(383): Stopped ServerConnector@2296d378{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:50,914 INFO [RS:0;jenkins-hbase4:35669] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:50,914 INFO [RS:2;jenkins-hbase4:36279] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:50,914 INFO [RS:1;jenkins-hbase4:34225] server.AbstractConnector(383): Stopped ServerConnector@3cc0e5fa{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-23 22:10:50,915 INFO [RS:0;jenkins-hbase4:35669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29678fc3{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:50,917 INFO [RS:2;jenkins-hbase4:36279] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c6b909d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:50,917 INFO [RS:0;jenkins-hbase4:35669] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70173995{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:50,917 INFO [RS:1;jenkins-hbase4:34225] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-23 22:10:50,917 INFO [RS:2;jenkins-hbase4:36279] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4208e963{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:50,917 INFO [RS:1;jenkins-hbase4:34225] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4085c82a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-23 22:10:50,917 INFO [RS:0;jenkins-hbase4:35669] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:50,917 INFO [RS:1;jenkins-hbase4:34225] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d6c9f9c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,STOPPED}
2023-07-23 22:10:50,918 INFO [RS:0;jenkins-hbase4:35669] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:50,918 INFO [RS:0;jenkins-hbase4:35669] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:50,918 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(3305): Received CLOSE for 1ff03168c527c73a7fdb4c4e5ed72b01
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:50,918 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(3305): Received CLOSE for 90b01685cf87b7512396438628b63f31
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(3305): Received CLOSE for 19818ecd19aa7b4949319d04b52e7d9c
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36279,1690150248063
2023-07-23 22:10:50,918 DEBUG [RS:2;jenkins-hbase4:36279] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1b1852f9 to 127.0.0.1:61961
2023-07-23 22:10:50,918 DEBUG [RS:2;jenkins-hbase4:36279] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:50,918 INFO [RS:2;jenkins-hbase4:36279] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:50,919 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35669,1690150247716
2023-07-23 22:10:50,919 DEBUG [RS:0;jenkins-hbase4:35669] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1de14574 to 127.0.0.1:61961
2023-07-23 22:10:50,919 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(3305): Received CLOSE for 1588230740
2023-07-23 22:10:50,919 DEBUG [RS:0;jenkins-hbase4:35669] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:50,919 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1474): Waiting on 1 regions to close
2023-07-23 22:10:50,919 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1478): Online Regions={1ff03168c527c73a7fdb4c4e5ed72b01=hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.}
2023-07-23 22:10:50,919 DEBUG [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1504): Waiting on 1ff03168c527c73a7fdb4c4e5ed72b01
2023-07-23 22:10:50,919 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1474): Waiting on 3 regions to close
2023-07-23 22:10:50,919 INFO [RS:1;jenkins-hbase4:34225] regionserver.HeapMemoryManager(220): Stopping
2023-07-23 22:10:50,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 22:10:50,921 INFO [RS:1;jenkins-hbase4:34225] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully.
2023-07-23 22:10:50,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1ff03168c527c73a7fdb4c4e5ed72b01, disabling compactions & flushes
2023-07-23 22:10:50,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 90b01685cf87b7512396438628b63f31, disabling compactions & flushes
2023-07-23 22:10:50,919 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1478): Online Regions={90b01685cf87b7512396438628b63f31=hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31., 19818ecd19aa7b4949319d04b52e7d9c=hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c., 1588230740=hbase:meta,,1.1588230740}
2023-07-23 22:10:50,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.
2023-07-23 22:10:50,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.
2023-07-23 22:10:50,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.
2023-07-23 22:10:50,921 INFO [RS:1;jenkins-hbase4:34225] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully.
2023-07-23 22:10:50,921 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 22:10:50,921 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting
2023-07-23 22:10:50,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 22:10:50,921 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34225,1690150247888
2023-07-23 22:10:50,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31. after waiting 0 ms
2023-07-23 22:10:50,922 DEBUG [RS:1;jenkins-hbase4:34225] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b618f64 to 127.0.0.1:61961
2023-07-23 22:10:50,921 DEBUG [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1504): Waiting on 1588230740, 19818ecd19aa7b4949319d04b52e7d9c, 90b01685cf87b7512396438628b63f31
2023-07-23 22:10:50,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.
2023-07-23 22:10:50,922 DEBUG [RS:1;jenkins-hbase4:34225] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:50,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.
2023-07-23 22:10:50,922 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 22:10:50,922 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34225,1690150247888; all regions closed.
2023-07-23 22:10:50,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01. after waiting 0 ms
2023-07-23 22:10:50,923 DEBUG [RS:1;jenkins-hbase4:34225] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore.
2023-07-23 22:10:50,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.
2023-07-23 22:10:50,922 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 22:10:50,923 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB
2023-07-23 22:10:50,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1ff03168c527c73a7fdb4c4e5ed72b01 1/1 column families, dataSize=585 B heapSize=1.04 KB
2023-07-23 22:10:50,930 DEBUG [RS:1;jenkins-hbase4:34225] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs
2023-07-23 22:10:50,930 INFO [RS:1;jenkins-hbase4:34225] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34225%2C1690150247888:(num 1690150248782)
2023-07-23 22:10:50,930 DEBUG [RS:1;jenkins-hbase4:34225] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:50,930 INFO [RS:1;jenkins-hbase4:34225] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:50,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/quota/90b01685cf87b7512396438628b63f31/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1
2023-07-23 22:10:50,931 INFO [RS:1;jenkins-hbase4:34225] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:50,931 INFO [RS:1;jenkins-hbase4:34225] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:50,931 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:50,931 INFO [RS:1;jenkins-hbase4:34225] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:50,931 INFO [RS:1;jenkins-hbase4:34225] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:50,932 INFO [RS:1;jenkins-hbase4:34225] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34225
2023-07-23 22:10:50,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.
2023-07-23 22:10:50,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 90b01685cf87b7512396438628b63f31:
2023-07-23 22:10:50,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690150249414.90b01685cf87b7512396438628b63f31.
2023-07-23 22:10:50,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 19818ecd19aa7b4949319d04b52e7d9c, disabling compactions & flushes
2023-07-23 22:10:50,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.
2023-07-23 22:10:50,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.
2023-07-23 22:10:50,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c. after waiting 0 ms
2023-07-23 22:10:50,935 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.
2023-07-23 22:10:50,935 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 19818ecd19aa7b4949319d04b52e7d9c 1/1 column families, dataSize=215 B heapSize=776 B
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34225,1690150247888
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:50,936 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:50,938 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34225,1690150247888]
2023-07-23 22:10:50,938 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34225,1690150247888; numProcessing=1
2023-07-23 22:10:50,939 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34225,1690150247888 already deleted, retry=false
2023-07-23 22:10:50,939 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34225,1690150247888 expired; onlineServers=2
2023-07-23 22:10:50,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/.tmp/m/30726d874d0245f99ad627892b654847
2023-07-23 22:10:50,949 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/info/aca70a0d9f7342918a79416cdfdc61cb
2023-07-23 22:10:50,955 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aca70a0d9f7342918a79416cdfdc61cb
2023-07-23 22:10:50,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/.tmp/m/30726d874d0245f99ad627892b654847 as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/m/30726d874d0245f99ad627892b654847
2023-07-23 22:10:50,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/.tmp/info/ee8fd492150e4aae8956088d3de532f4
2023-07-23 22:10:50,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/m/30726d874d0245f99ad627892b654847, entries=1, sequenceid=7, filesize=4.9 K
2023-07-23 22:10:50,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee8fd492150e4aae8956088d3de532f4
2023-07-23 22:10:50,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/.tmp/info/ee8fd492150e4aae8956088d3de532f4 as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/info/ee8fd492150e4aae8956088d3de532f4
2023-07-23 22:10:50,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 1ff03168c527c73a7fdb4c4e5ed72b01 in 44ms, sequenceid=7, compaction requested=false
2023-07-23 22:10:50,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup'
2023-07-23 22:10:50,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee8fd492150e4aae8956088d3de532f4
2023-07-23 22:10:50,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/info/ee8fd492150e4aae8956088d3de532f4, entries=3, sequenceid=8, filesize=5.0 K
2023-07-23 22:10:50,975 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 19818ecd19aa7b4949319d04b52e7d9c in 40ms, sequenceid=8, compaction requested=false
2023-07-23 22:10:50,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-07-23 22:10:50,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/rsgroup/1ff03168c527c73a7fdb4c4e5ed72b01/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1
2023-07-23 22:10:50,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/rep_barrier/76993c420a41408a8c45c6ac4dbb055e
2023-07-23 22:10:50,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-23 22:10:50,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/namespace/19818ecd19aa7b4949319d04b52e7d9c/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1
2023-07-23 22:10:50,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.
2023-07-23 22:10:50,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1ff03168c527c73a7fdb4c4e5ed72b01:
2023-07-23 22:10:50,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690150249092.1ff03168c527c73a7fdb4c4e5ed72b01.
2023-07-23 22:10:50,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.
2023-07-23 22:10:50,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 19818ecd19aa7b4949319d04b52e7d9c:
2023-07-23 22:10:50,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690150248944.19818ecd19aa7b4949319d04b52e7d9c.
2023-07-23 22:10:50,995 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76993c420a41408a8c45c6ac4dbb055e
2023-07-23 22:10:51,007 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/table/633dee294fcb4fbe916af2c1e69b0434
2023-07-23 22:10:51,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 633dee294fcb4fbe916af2c1e69b0434
2023-07-23 22:10:51,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/info/aca70a0d9f7342918a79416cdfdc61cb as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/info/aca70a0d9f7342918a79416cdfdc61cb
2023-07-23 22:10:51,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aca70a0d9f7342918a79416cdfdc61cb
2023-07-23 22:10:51,019 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/info/aca70a0d9f7342918a79416cdfdc61cb, entries=32, sequenceid=31, filesize=8.5 K
2023-07-23 22:10:51,020 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/rep_barrier/76993c420a41408a8c45c6ac4dbb055e as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/rep_barrier/76993c420a41408a8c45c6ac4dbb055e
2023-07-23 22:10:51,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 76993c420a41408a8c45c6ac4dbb055e
2023-07-23 22:10:51,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/rep_barrier/76993c420a41408a8c45c6ac4dbb055e, entries=1, sequenceid=31, filesize=4.9 K
2023-07-23 22:10:51,025 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/.tmp/table/633dee294fcb4fbe916af2c1e69b0434 as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/table/633dee294fcb4fbe916af2c1e69b0434
2023-07-23 22:10:51,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 633dee294fcb4fbe916af2c1e69b0434
2023-07-23 22:10:51,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/table/633dee294fcb4fbe916af2c1e69b0434, entries=8, sequenceid=31, filesize=5.2 K
2023-07-23 22:10:51,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 108ms, sequenceid=31, compaction requested=false
2023-07-23 22:10:51,032 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-07-23 22:10:51,038 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:51,038 INFO [RS:1;jenkins-hbase4:34225] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34225,1690150247888; zookeeper connection closed.
2023-07-23 22:10:51,038 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:34225-0x101943c9f3f0002, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-23 22:10:51,039 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7454e6f3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7454e6f3
2023-07-23 22:10:51,040 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1
2023-07-23 22:10:51,041 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-23 22:10:51,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:51,041 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 22:10:51,041 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:51,119 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35669,1690150247716; all regions closed.
2023-07-23 22:10:51,119 DEBUG [RS:0;jenkins-hbase4:35669] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore.
2023-07-23 22:10:51,122 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36279,1690150248063; all regions closed.
2023-07-23 22:10:51,123 DEBUG [RS:2;jenkins-hbase4:36279] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore.
2023-07-23 22:10:51,133 DEBUG [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs
2023-07-23 22:10:51,133 INFO [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36279%2C1690150248063.meta:.meta(num 1690150248864)
2023-07-23 22:10:51,133 DEBUG [RS:0;jenkins-hbase4:35669] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs
2023-07-23 22:10:51,133 INFO [RS:0;jenkins-hbase4:35669] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35669%2C1690150247716:(num 1690150248787)
2023-07-23 22:10:51,133 DEBUG [RS:0;jenkins-hbase4:35669] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:51,133 INFO [RS:0;jenkins-hbase4:35669] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:51,135 INFO [RS:0;jenkins-hbase4:35669] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:51,135 INFO [RS:0;jenkins-hbase4:35669] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-23 22:10:51,135 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:51,135 INFO [RS:0;jenkins-hbase4:35669] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-23 22:10:51,135 INFO [RS:0;jenkins-hbase4:35669] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-23 22:10:51,136 INFO [RS:0;jenkins-hbase4:35669] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35669
2023-07-23 22:10:51,139 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:51,140 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35669,1690150247716]
2023-07-23 22:10:51,140 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35669,1690150247716; numProcessing=2
2023-07-23 22:10:51,140 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716
2023-07-23 22:10:51,140 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35669,1690150247716
2023-07-23 22:10:51,146 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35669,1690150247716 already deleted, retry=false
2023-07-23 22:10:51,146 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35669,1690150247716 expired; onlineServers=1
2023-07-23 22:10:51,157 DEBUG [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/oldWALs
2023-07-23 22:10:51,157 INFO [RS:2;jenkins-hbase4:36279] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36279%2C1690150248063:(num 1690150248782)
2023-07-23 22:10:51,157 DEBUG [RS:2;jenkins-hbase4:36279] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:51,157 INFO [RS:2;jenkins-hbase4:36279] regionserver.LeaseManager(133): Closed leases
2023-07-23 22:10:51,158 INFO [RS:2;jenkins-hbase4:36279] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-23 22:10:51,158 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-23 22:10:51,159 INFO [RS:2;jenkins-hbase4:36279] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36279
2023-07-23 22:10:51,167 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:51,167 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36279,1690150248063
2023-07-23 22:10:51,167 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36279,1690150248063]
2023-07-23 22:10:51,167 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36279,1690150248063; numProcessing=3
2023-07-23 22:10:51,170 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36279,1690150248063 already deleted, retry=false
2023-07-23 22:10:51,170 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36279,1690150248063 expired; onlineServers=0
2023-07-23 22:10:51,170 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35043,1690150247517' *****
2023-07-23 22:10:51,170 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-23 22:10:51,171 DEBUG [M:0;jenkins-hbase4:35043] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b2d41b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:51,171 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-23 22:10:51,173 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-23 22:10:51,173 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:51,173 INFO [M:0;jenkins-hbase4:35043] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2dc307a3{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 22:10:51,173 DEBUG
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:51,174 INFO [M:0;jenkins-hbase4:35043] server.AbstractConnector(383): Stopped ServerConnector@2eb259{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:51,174 INFO [M:0;jenkins-hbase4:35043] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 22:10:51,174 INFO [M:0;jenkins-hbase4:35043] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3bc6c1b7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:51,174 INFO [M:0;jenkins-hbase4:35043] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ef61f93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:51,175 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35043,1690150247517 2023-07-23 22:10:51,175 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35043,1690150247517; all regions closed. 
2023-07-23 22:10:51,175 DEBUG [M:0;jenkins-hbase4:35043] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:51,175 INFO [M:0;jenkins-hbase4:35043] master.HMaster(1491): Stopping master jetty server 2023-07-23 22:10:51,178 INFO [M:0;jenkins-hbase4:35043] server.AbstractConnector(383): Stopped ServerConnector@4d9956c9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:51,178 DEBUG [M:0;jenkins-hbase4:35043] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 22:10:51,179 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 22:10:51,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150248500] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150248500,5,FailOnTimeoutGroup] 2023-07-23 22:10:51,179 DEBUG [M:0;jenkins-hbase4:35043] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 22:10:51,180 INFO [M:0;jenkins-hbase4:35043] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 22:10:51,180 INFO [M:0;jenkins-hbase4:35043] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-23 22:10:51,180 INFO [M:0;jenkins-hbase4:35043] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 22:10:51,180 DEBUG [M:0;jenkins-hbase4:35043] master.HMaster(1512): Stopping service threads 2023-07-23 22:10:51,180 INFO [M:0;jenkins-hbase4:35043] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 22:10:51,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150248500] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150248500,5,FailOnTimeoutGroup] 2023-07-23 22:10:51,181 ERROR [M:0;jenkins-hbase4:35043] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 22:10:51,182 INFO [M:0;jenkins-hbase4:35043] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 22:10:51,182 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-23 22:10:51,183 DEBUG [M:0;jenkins-hbase4:35043] zookeeper.ZKUtil(398): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 22:10:51,183 WARN [M:0;jenkins-hbase4:35043] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 22:10:51,183 INFO [M:0;jenkins-hbase4:35043] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 22:10:51,183 INFO [M:0;jenkins-hbase4:35043] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 22:10:51,184 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 22:10:51,184 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:51,184 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:51,184 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 22:10:51,184 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:51,184 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-23 22:10:51,201 INFO [M:0;jenkins-hbase4:35043] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1f9ad022870549a590b48f1e8b32b0cd 2023-07-23 22:10:51,207 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/1f9ad022870549a590b48f1e8b32b0cd as hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1f9ad022870549a590b48f1e8b32b0cd 2023-07-23 22:10:51,212 INFO [M:0;jenkins-hbase4:35043] regionserver.HStore(1080): Added hdfs://localhost:45671/user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/1f9ad022870549a590b48f1e8b32b0cd, entries=24, sequenceid=194, filesize=12.4 K 2023-07-23 22:10:51,212 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95228, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=194, compaction requested=false 2023-07-23 22:10:51,214 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:51,214 DEBUG [M:0;jenkins-hbase4:35043] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:51,217 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/59223e83-1adb-4f0e-e50a-c7fa8998e54f/MasterData/WALs/jenkins-hbase4.apache.org,35043,1690150247517/jenkins-hbase4.apache.org%2C35043%2C1690150247517.1690150248380 not finished, retry = 0 2023-07-23 22:10:51,318 INFO [M:0;jenkins-hbase4:35043] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 22:10:51,318 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 22:10:51,319 INFO [M:0;jenkins-hbase4:35043] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35043 2023-07-23 22:10:51,320 DEBUG [M:0;jenkins-hbase4:35043] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,35043,1690150247517 already deleted, retry=false 2023-07-23 22:10:51,499 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,499 INFO [M:0;jenkins-hbase4:35043] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35043,1690150247517; zookeeper connection closed. 
2023-07-23 22:10:51,499 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): master:35043-0x101943c9f3f0000, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,599 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,599 INFO [RS:2;jenkins-hbase4:36279] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36279,1690150248063; zookeeper connection closed. 2023-07-23 22:10:51,599 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:36279-0x101943c9f3f0003, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,599 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@683c9301] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@683c9301 2023-07-23 22:10:51,699 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,699 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): regionserver:35669-0x101943c9f3f0001, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:51,699 INFO [RS:0;jenkins-hbase4:35669] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35669,1690150247716; zookeeper connection closed. 
2023-07-23 22:10:51,700 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4bc309f9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4bc309f9 2023-07-23 22:10:51,700 INFO [Listener at localhost/45331] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-23 22:10:51,700 WARN [Listener at localhost/45331] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:51,704 INFO [Listener at localhost/45331] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:51,809 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:51,809 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-551064032-172.31.14.131-1690150246735 (Datanode Uuid 494eeee5-a62f-43be-8fac-ee6c4c18ea90) service to localhost/127.0.0.1:45671 2023-07-23 22:10:51,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data5/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:51,810 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data6/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:51,812 WARN [Listener 
at localhost/45331] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:51,815 INFO [Listener at localhost/45331] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:51,920 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:51,920 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-551064032-172.31.14.131-1690150246735 (Datanode Uuid 2185afd8-b75f-4a0d-8426-a5f39050e1d2) service to localhost/127.0.0.1:45671 2023-07-23 22:10:51,920 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data3/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:51,921 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data4/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:51,922 WARN [Listener at localhost/45331] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:51,927 INFO [Listener at localhost/45331] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:52,029 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] 
datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:52,029 WARN [BP-551064032-172.31.14.131-1690150246735 heartbeating to localhost/127.0.0.1:45671] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-551064032-172.31.14.131-1690150246735 (Datanode Uuid 88c93ba2-e254-4c02-995b-b2ca27337e92) service to localhost/127.0.0.1:45671 2023-07-23 22:10:52,030 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data1/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:52,030 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/cluster_387cf2e0-45e2-abf1-8f90-a4d859acec74/dfs/data/data2/current/BP-551064032-172.31.14.131-1690150246735] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:52,041 INFO [Listener at localhost/45331] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:52,157 INFO [Listener at localhost/45331] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 22:10:52,186 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-23 22:10:52,186 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-23 22:10:52,186 INFO [Listener at localhost/45331] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.log.dir so I do NOT create it in target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/61047217-f98e-161f-af73-83a1e8d795c7/hadoop.tmp.dir so I do NOT create it in target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c, deleteOnExit=true 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/test.cache.data in system properties and HBase conf 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.tmp.dir in system properties and HBase conf 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir in 
system properties and HBase conf 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-23 22:10:52,187 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-23 22:10:52,188 DEBUG [Listener at localhost/45331] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-23 
22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-23 22:10:52,188 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/nfs.dump.dir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 22:10:52,189 INFO [Listener at localhost/45331] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 22:10:52,193 WARN [Listener at localhost/45331] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) 
assuming SECONDS 2023-07-23 22:10:52,193 WARN [Listener at localhost/45331] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 22:10:52,235 WARN [Listener at localhost/45331] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:52,237 INFO [Listener at localhost/45331] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:52,243 INFO [Listener at localhost/45331] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/Jetty_localhost_38711_hdfs____.wmflfo/webapp 2023-07-23 22:10:52,255 DEBUG [Listener at localhost/45331-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101943c9f3f000a, quorum=127.0.0.1:61961, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 22:10:52,256 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101943c9f3f000a, quorum=127.0.0.1:61961, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 22:10:52,336 INFO [Listener at localhost/45331] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38711 2023-07-23 22:10:52,341 WARN [Listener at localhost/45331] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 22:10:52,341 WARN [Listener at localhost/45331] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 22:10:52,381 WARN [Listener at localhost/41205] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:52,393 WARN [Listener at localhost/41205] 
conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:52,397 WARN [Listener at localhost/41205] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:52,399 INFO [Listener at localhost/41205] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:52,406 INFO [Listener at localhost/41205] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/Jetty_localhost_34559_datanode____.3tw39d/webapp 2023-07-23 22:10:52,503 INFO [Listener at localhost/41205] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34559 2023-07-23 22:10:52,511 WARN [Listener at localhost/36431] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:52,530 WARN [Listener at localhost/36431] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:52,532 WARN [Listener at localhost/36431] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 22:10:52,533 INFO [Listener at localhost/36431] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:52,537 INFO [Listener at localhost/36431] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/Jetty_localhost_35847_datanode____9k2pa8/webapp 2023-07-23 22:10:52,628 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xccdfc4ed625ab87a: Processing first storage report for DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82 from datanode 1dd5e8ad-3667-4b05-b1eb-ed80e5875453 2023-07-23 22:10:52,629 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xccdfc4ed625ab87a: from storage DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82 node DatanodeRegistration(127.0.0.1:33481, datanodeUuid=1dd5e8ad-3667-4b05-b1eb-ed80e5875453, infoPort=35407, infoSecurePort=0, ipcPort=36431, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,629 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xccdfc4ed625ab87a: Processing first storage report for DS-fa55c3da-ef2e-4347-beaa-83c31ca401d4 from datanode 1dd5e8ad-3667-4b05-b1eb-ed80e5875453 2023-07-23 22:10:52,629 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xccdfc4ed625ab87a: from storage DS-fa55c3da-ef2e-4347-beaa-83c31ca401d4 node DatanodeRegistration(127.0.0.1:33481, datanodeUuid=1dd5e8ad-3667-4b05-b1eb-ed80e5875453, infoPort=35407, infoSecurePort=0, ipcPort=36431, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,635 INFO [Listener at localhost/36431] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35847 2023-07-23 22:10:52,641 WARN [Listener at localhost/42459] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:52,660 WARN [Listener at localhost/42459] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 22:10:52,662 WARN [Listener at localhost/42459] http.HttpRequestLog(97): Jetty request 
log can only be enabled using Log4j 2023-07-23 22:10:52,663 INFO [Listener at localhost/42459] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 22:10:52,668 INFO [Listener at localhost/42459] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/Jetty_localhost_41949_datanode____9wpmiw/webapp 2023-07-23 22:10:52,749 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5d2f8b18e20e42b: Processing first storage report for DS-1aade8ba-7df3-4d53-94c5-da7833dd328f from datanode d076bb00-a6b7-4c52-a58f-fd85294ad85d 2023-07-23 22:10:52,749 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5d2f8b18e20e42b: from storage DS-1aade8ba-7df3-4d53-94c5-da7833dd328f node DatanodeRegistration(127.0.0.1:36601, datanodeUuid=d076bb00-a6b7-4c52-a58f-fd85294ad85d, infoPort=38569, infoSecurePort=0, ipcPort=42459, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,749 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc5d2f8b18e20e42b: Processing first storage report for DS-91047fc4-854b-458c-8cda-28578da4a9c1 from datanode d076bb00-a6b7-4c52-a58f-fd85294ad85d 2023-07-23 22:10:52,749 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc5d2f8b18e20e42b: from storage DS-91047fc4-854b-458c-8cda-28578da4a9c1 node DatanodeRegistration(127.0.0.1:36601, datanodeUuid=d076bb00-a6b7-4c52-a58f-fd85294ad85d, infoPort=38569, infoSecurePort=0, ipcPort=42459, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, 
hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,765 INFO [Listener at localhost/42459] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41949 2023-07-23 22:10:52,774 WARN [Listener at localhost/42983] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 22:10:52,862 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x42c1e980acb3ee44: Processing first storage report for DS-f8916320-2f2d-4e1e-965c-b01582d505cb from datanode 097813cf-8d7d-4630-8800-f6f6727d195d 2023-07-23 22:10:52,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x42c1e980acb3ee44: from storage DS-f8916320-2f2d-4e1e-965c-b01582d505cb node DatanodeRegistration(127.0.0.1:38743, datanodeUuid=097813cf-8d7d-4630-8800-f6f6727d195d, infoPort=39993, infoSecurePort=0, ipcPort=42983, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,862 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x42c1e980acb3ee44: Processing first storage report for DS-2ce9aa29-5842-4ed4-83aa-e8deac6d9070 from datanode 097813cf-8d7d-4630-8800-f6f6727d195d 2023-07-23 22:10:52,862 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x42c1e980acb3ee44: from storage DS-2ce9aa29-5842-4ed4-83aa-e8deac6d9070 node DatanodeRegistration(127.0.0.1:38743, datanodeUuid=097813cf-8d7d-4630-8800-f6f6727d195d, infoPort=39993, infoSecurePort=0, ipcPort=42983, storageInfo=lv=-57;cid=testClusterID;nsid=621183792;c=1690150252196), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-23 22:10:52,881 DEBUG [Listener at localhost/42983] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136 2023-07-23 22:10:52,883 INFO [Listener at localhost/42983] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/zookeeper_0, clientPort=59587, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 22:10:52,885 INFO [Listener at localhost/42983] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59587 2023-07-23 22:10:52,885 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:52,886 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:52,901 INFO [Listener at localhost/42983] util.FSUtils(471): Created version file at hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a with version=8 2023-07-23 22:10:52,901 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to 
hdfs://localhost:36271/user/jenkins/test-data/53d7cbf8-fc21-1275-68b4-e4bb5b0d6ba9/hbase-staging 2023-07-23 22:10:52,902 DEBUG [Listener at localhost/42983] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 22:10:52,902 DEBUG [Listener at localhost/42983] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 22:10:52,902 DEBUG [Listener at localhost/42983] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 22:10:52,902 DEBUG [Listener at localhost/42983] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:52,903 INFO [Listener at localhost/42983] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer 
hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:52,904 INFO [Listener at localhost/42983] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46533 2023-07-23 22:10:52,904 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:52,905 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:52,906 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46533 connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:52,913 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:465330x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:52,915 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46533-0x101943cb4560000 connected 2023-07-23 22:10:52,947 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:52,947 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:52,948 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:52,950 DEBUG [Listener at localhost/42983] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46533 2023-07-23 22:10:52,951 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46533 2023-07-23 22:10:52,954 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46533 2023-07-23 22:10:52,955 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46533 2023-07-23 22:10:52,955 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46533 2023-07-23 22:10:52,957 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:52,957 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:52,957 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:52,958 INFO [Listener at localhost/42983] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 22:10:52,958 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:52,958 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 
22:10:52,958 INFO [Listener at localhost/42983] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:52,959 INFO [Listener at localhost/42983] http.HttpServer(1146): Jetty bound to port 43249 2023-07-23 22:10:52,959 INFO [Listener at localhost/42983] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:52,961 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:52,961 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5514b40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:52,962 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:52,962 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7e0de42c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:53,087 INFO [Listener at localhost/42983] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:53,089 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:53,089 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:53,090 INFO [Listener at localhost/42983] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 22:10:53,091 
INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,092 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@760f93bc{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/jetty-0_0_0_0-43249-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2773986104577992930/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 22:10:53,093 INFO [Listener at localhost/42983] server.AbstractConnector(333): Started ServerConnector@779b2aaf{HTTP/1.1, (http/1.1)}{0.0.0.0:43249} 2023-07-23 22:10:53,093 INFO [Listener at localhost/42983] server.Server(415): Started @40822ms 2023-07-23 22:10:53,093 INFO [Listener at localhost/42983] master.HMaster(444): hbase.rootdir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a, hbase.cluster.distributed=false 2023-07-23 22:10:53,107 INFO [Listener at localhost/42983] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): 
Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:53,108 INFO [Listener at localhost/42983] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:53,109 INFO [Listener at localhost/42983] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39305 2023-07-23 22:10:53,109 INFO [Listener at localhost/42983] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:53,110 DEBUG [Listener at localhost/42983] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:53,111 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,112 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,113 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39305 connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:53,118 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:393050x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:53,119 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39305-0x101943cb4560001 connected 
2023-07-23 22:10:53,120 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:53,120 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:53,121 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:53,122 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-23 22:10:53,124 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39305 2023-07-23 22:10:53,124 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39305 2023-07-23 22:10:53,125 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-23 22:10:53,126 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39305 2023-07-23 22:10:53,128 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:53,128 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:53,128 INFO [Listener at localhost/42983] http.HttpServer(900): Added global 
filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:53,129 INFO [Listener at localhost/42983] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:53,129 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:53,129 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:53,129 INFO [Listener at localhost/42983] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:53,129 INFO [Listener at localhost/42983] http.HttpServer(1146): Jetty bound to port 40363 2023-07-23 22:10:53,130 INFO [Listener at localhost/42983] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:53,137 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,137 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c2d406c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:53,138 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,138 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.s.ServletContextHandler@50ec94a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:53,251 INFO [Listener at localhost/42983] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:53,252 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:53,252 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:53,252 INFO [Listener at localhost/42983] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:53,253 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,254 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1be9377a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/jetty-0_0_0_0-40363-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4411286779224566449/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:53,255 INFO [Listener at localhost/42983] server.AbstractConnector(333): Started ServerConnector@78d00b85{HTTP/1.1, (http/1.1)}{0.0.0.0:40363} 2023-07-23 22:10:53,255 INFO [Listener at localhost/42983] server.Server(415): Started @40984ms 2023-07-23 22:10:53,268 INFO [Listener at localhost/42983] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:53,268 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated 
default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,269 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,269 INFO [Listener at localhost/42983] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:53,269 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,269 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:53,269 INFO [Listener at localhost/42983] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:53,270 INFO [Listener at localhost/42983] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40167 2023-07-23 22:10:53,270 INFO [Listener at localhost/42983] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:53,271 DEBUG [Listener at localhost/42983] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:53,272 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,273 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can 
do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,274 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40167 connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:53,277 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:401670x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:53,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40167-0x101943cb4560002 connected 2023-07-23 22:10:53,279 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:53,280 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:53,280 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:53,282 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40167 2023-07-23 22:10:53,282 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40167 2023-07-23 22:10:53,283 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40167 2023-07-23 22:10:53,283 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40167 
2023-07-23 22:10:53,286 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40167 2023-07-23 22:10:53,288 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:53,288 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:53,288 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:53,289 INFO [Listener at localhost/42983] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:53,289 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:53,289 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:53,289 INFO [Listener at localhost/42983] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 22:10:53,290 INFO [Listener at localhost/42983] http.HttpServer(1146): Jetty bound to port 40541 2023-07-23 22:10:53,290 INFO [Listener at localhost/42983] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:53,291 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,291 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ace3e95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:53,291 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,292 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@77270bc7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:53,404 INFO [Listener at localhost/42983] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:53,405 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:53,405 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:53,405 INFO [Listener at localhost/42983] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 22:10:53,406 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,407 INFO [Listener at localhost/42983] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@2657b892{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/jetty-0_0_0_0-40541-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9025902241796502051/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:53,409 INFO [Listener at localhost/42983] server.AbstractConnector(333): Started ServerConnector@748d39b9{HTTP/1.1, (http/1.1)}{0.0.0.0:40541} 2023-07-23 22:10:53,409 INFO [Listener at localhost/42983] server.Server(415): Started @41138ms 2023-07-23 22:10:53,427 INFO [Listener at localhost/42983] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:53,427 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,427 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,428 INFO [Listener at localhost/42983] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:53,428 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:53,428 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 
2023-07-23 22:10:53,428 INFO [Listener at localhost/42983] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:53,430 INFO [Listener at localhost/42983] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34751 2023-07-23 22:10:53,431 INFO [Listener at localhost/42983] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:53,434 DEBUG [Listener at localhost/42983] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:53,435 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,437 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,438 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34751 connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:53,442 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:347510x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:53,443 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:347510x0, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:53,444 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:347510x0, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:53,444 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): 
regionserver:347510x0, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:53,451 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34751-0x101943cb4560003 connected 2023-07-23 22:10:53,451 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34751 2023-07-23 22:10:53,452 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34751 2023-07-23 22:10:53,455 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34751 2023-07-23 22:10:53,455 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34751 2023-07-23 22:10:53,455 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34751 2023-07-23 22:10:53,458 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:53,458 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:53,458 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:53,459 INFO [Listener at localhost/42983] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:53,459 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:53,459 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:53,459 INFO [Listener at localhost/42983] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 22:10:53,460 INFO [Listener at localhost/42983] http.HttpServer(1146): Jetty bound to port 41731 2023-07-23 22:10:53,460 INFO [Listener at localhost/42983] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:53,466 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,466 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7aeb7de4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:53,467 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,467 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3abd10aa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:53,611 INFO [Listener at localhost/42983] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:53,611 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(334): 
DefaultSessionIdManager workerName=node0 2023-07-23 22:10:53,611 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:53,612 INFO [Listener at localhost/42983] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:53,612 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:53,613 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6c4d16d0{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/jetty-0_0_0_0-41731-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2746272119241596883/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:53,615 INFO [Listener at localhost/42983] server.AbstractConnector(333): Started ServerConnector@78819e6b{HTTP/1.1, (http/1.1)}{0.0.0.0:41731} 2023-07-23 22:10:53,615 INFO [Listener at localhost/42983] server.Server(415): Started @41344ms 2023-07-23 22:10:53,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:53,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@703dda43{HTTP/1.1, (http/1.1)}{0.0.0.0:44443} 2023-07-23 22:10:53,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41349ms 2023-07-23 22:10:53,620 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,621 DEBUG [Listener at localhost/42983-EventThread] 
zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 22:10:53,622 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,623 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:53,623 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:53,623 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:53,623 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:53,625 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 
22:10:53,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46533,1690150252902 from backup master directory 2023-07-23 22:10:53,627 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 22:10:53,628 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,628 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 22:10:53,628 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 22:10:53,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,643 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/hbase.id with ID: 43288b12-8217-480d-a1f0-7edd92db5b64 2023-07-23 22:10:53,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:53,656 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5161f6b0 to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:53,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e1028ff, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:53,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:53,682 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 22:10:53,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:53,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store-tmp 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 22:10:53,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:53,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:53,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/WALs/jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,696 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46533%2C1690150252902, suffix=, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/WALs/jenkins-hbase4.apache.org,46533,1690150252902, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/oldWALs, maxLogs=10 2023-07-23 22:10:53,724 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK] 2023-07-23 22:10:53,725 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK] 2023-07-23 22:10:53,724 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK] 2023-07-23 22:10:53,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/WALs/jenkins-hbase4.apache.org,46533,1690150252902/jenkins-hbase4.apache.org%2C46533%2C1690150252902.1690150253696 2023-07-23 22:10:53,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK], DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK], DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]] 2023-07-23 22:10:53,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:53,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:53,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:53,737 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:53,747 INFO 
[StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:53,749 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 22:10:53,750 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 22:10:53,750 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:53,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 
22:10:53,753 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:53,757 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 22:10:53,759 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:53,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12051158400, jitterRate=0.12235158681869507}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:53,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:53,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 22:10:53,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 22:10:53,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 22:10:53,764 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store 
lease recovery... 2023-07-23 22:10:53,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-23 22:10:53,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 22:10:53,766 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 22:10:53,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 22:10:53,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-23 22:10:53,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 22:10:53,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 22:10:53,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 22:10:53,788 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/switch/split 2023-07-23 22:10:53,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 22:10:53,791 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 22:10:53,792 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:53,792 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:53,793 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:53,793 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 22:10:53,793 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46533,1690150252902, sessionid=0x101943cb4560000, setting 
cluster-up flag (Was=false) 2023-07-23 22:10:53,800 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,809 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 22:10:53,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,814 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:53,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 22:10:53,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:53,825 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.hbase-snapshot/.tmp 2023-07-23 22:10:53,831 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 22:10:53,831 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 
2023-07-23 22:10:53,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 22:10:53,832 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 22:10:53,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 22:10:53,833 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 22:10:53,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 22:10:53,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-23 22:10:53,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false
2023-07-23 22:10:53,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc.
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:53,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690150283849
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner
2023-07-23 22:10:53,850 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads
2023-07-23 22:10:53,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2
2023-07-23 22:10:53,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner
2023-07-23 22:10:53,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2023-07-23 22:10:53,852 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 22:10:53,852 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region
2023-07-23 22:10:53,853 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:53,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2023-07-23 22:10:53,854 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2023-07-23 22:10:53,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150253855,5,FailOnTimeoutGroup]
2023-07-23 22:10:53,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150253855,5,FailOnTimeoutGroup]
2023-07-23 22:10:53,857 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it.
2023-07-23 22:10:53,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,858 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,920 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(951): ClusterId : 43288b12-8217-480d-a1f0-7edd92db5b64
2023-07-23 22:10:53,920 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(951): ClusterId : 43288b12-8217-480d-a1f0-7edd92db5b64
2023-07-23 22:10:53,920 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(951): ClusterId : 43288b12-8217-480d-a1f0-7edd92db5b64
2023-07-23 22:10:53,920 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:53,920 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:53,920 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing
2023-07-23 22:10:53,922 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:53,922 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:53,922 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:53,922 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized
2023-07-23 22:10:53,922 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:53,922 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing
2023-07-23 22:10:53,929 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:53,930 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:53,930 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ReadOnlyZKClient(139): Connect 0x1d40ca5e to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:53,930 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized
2023-07-23 22:10:53,934 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ReadOnlyZKClient(139): Connect 0x30243ecb to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:53,934 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ReadOnlyZKClient(139): Connect 0x6b2197b7 to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-23 22:10:53,943 DEBUG [RS:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13872786, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:53,943 DEBUG [RS:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@17b8a12e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:53,945 DEBUG [RS:2;jenkins-hbase4:34751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@395840b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:53,945 DEBUG [RS:1;jenkins-hbase4:40167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f1e0e0b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-23 22:10:53,945 DEBUG [RS:2;jenkins-hbase4:34751] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15e4dd6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:53,945 DEBUG [RS:1;jenkins-hbase4:40167] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a1c9d8c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-23 22:10:53,957 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39305
2023-07-23 22:10:53,957 INFO [RS:0;jenkins-hbase4:39305] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:53,957 INFO [RS:0;jenkins-hbase4:39305] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:53,957 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:53,957 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46533,1690150252902 with isa=jenkins-hbase4.apache.org/172.31.14.131:39305, startcode=1690150253107
2023-07-23 22:10:53,957 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34751
2023-07-23 22:10:53,957 DEBUG [RS:0;jenkins-hbase4:39305] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:53,957 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40167
2023-07-23 22:10:53,958 INFO [RS:2;jenkins-hbase4:34751] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:53,958 INFO [RS:2;jenkins-hbase4:34751] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:53,958 INFO [RS:1;jenkins-hbase4:40167] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled
2023-07-23 22:10:53,958 INFO [RS:1;jenkins-hbase4:40167] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled
2023-07-23 22:10:53,958 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:53,958 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1022): About to register with Master.
2023-07-23 22:10:53,958 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46533,1690150252902 with isa=jenkins-hbase4.apache.org/172.31.14.131:34751, startcode=1690150253426
2023-07-23 22:10:53,958 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46533,1690150252902 with isa=jenkins-hbase4.apache.org/172.31.14.131:40167, startcode=1690150253268
2023-07-23 22:10:53,958 DEBUG [RS:2;jenkins-hbase4:34751] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:53,958 DEBUG [RS:1;jenkins-hbase4:40167] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false
2023-07-23 22:10:53,959 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48865, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:53,961 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41663, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:53,961 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52467, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService
2023-07-23 22:10:53,961 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,961 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:53,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1
2023-07-23 22:10:53,962 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:53,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2
2023-07-23 22:10:53,962 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a
2023-07-23 22:10:53,962 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,962 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41205
2023-07-23 22:10:53,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers.
2023-07-23 22:10:53,962 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a
2023-07-23 22:10:53,962 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3
2023-07-23 22:10:53,962 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41205
2023-07-23 22:10:53,962 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43249
2023-07-23 22:10:53,962 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43249
2023-07-23 22:10:53,962 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a
2023-07-23 22:10:53,962 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41205
2023-07-23 22:10:53,962 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43249
2023-07-23 22:10:53,967 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-23 22:10:53,968 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,968 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,968 WARN [RS:2;jenkins-hbase4:34751] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:53,968 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,968 INFO [RS:2;jenkins-hbase4:34751] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:53,968 WARN [RS:0;jenkins-hbase4:39305] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:53,968 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39305,1690150253107]
2023-07-23 22:10:53,968 INFO [RS:0;jenkins-hbase4:39305] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:53,968 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34751,1690150253426]
2023-07-23 22:10:53,969 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40167,1690150253268]
2023-07-23 22:10:53,969 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,968 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,968 WARN [RS:1;jenkins-hbase4:40167] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-23 22:10:53,969 INFO [RS:1;jenkins-hbase4:40167] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:53,969 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,977 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,977 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,977 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:53,977 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,977 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,977 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:53,978 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,978 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,978 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:53,979 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:53,979 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:53,979 DEBUG [RS:2;jenkins-hbase4:34751] regionserver.Replication(139): Replication stats-in-log period=300 seconds
2023-07-23 22:10:53,979 INFO [RS:1;jenkins-hbase4:40167] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:53,979 INFO [RS:0;jenkins-hbase4:39305] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:53,979 INFO [RS:2;jenkins-hbase4:34751] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds
2023-07-23 22:10:53,982 INFO [RS:1;jenkins-hbase4:40167] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:53,983 INFO [RS:0;jenkins-hbase4:39305] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:53,983 INFO [RS:2;jenkins-hbase4:34751] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false
2023-07-23 22:10:53,983 INFO [RS:1;jenkins-hbase4:40167] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:53,983 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,984 INFO [RS:0;jenkins-hbase4:39305] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:53,984 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:53,984 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,985 INFO [RS:2;jenkins-hbase4:34751] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms
2023-07-23 22:10:53,985 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,985 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:53,986 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S
2023-07-23 22:10:53,986 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,987 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,987 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,987 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,987 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 DEBUG [RS:1;jenkins-hbase4:40167] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,988 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,990 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,990 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,991 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,991 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,991 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,992 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:53,992 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2
2023-07-23 22:10:53,992 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,992 DEBUG [RS:2;jenkins-hbase4:34751] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,993 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,993 DEBUG [RS:0;jenkins-hbase4:39305] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1
2023-07-23 22:10:53,993 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,994 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,994 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,999 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,999 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:53,999 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,010 INFO [RS:1;jenkins-hbase4:40167] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:54,010 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40167,1690150253268-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,015 INFO [RS:0;jenkins-hbase4:39305] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:54,015 INFO [RS:2;jenkins-hbase4:34751] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-23 22:10:54,015 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39305,1690150253107-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,015 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34751,1690150253426-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,029 INFO [RS:1;jenkins-hbase4:40167] regionserver.Replication(203): jenkins-hbase4.apache.org,40167,1690150253268 started
2023-07-23 22:10:54,029 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40167,1690150253268, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40167, sessionid=0x101943cb4560002
2023-07-23 22:10:54,029 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:54,029 DEBUG [RS:1;jenkins-hbase4:40167] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:54,029 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40167,1690150253268'
2023-07-23 22:10:54,029 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40167,1690150253268'
2023-07-23 22:10:54,030 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:54,031 DEBUG [RS:1;jenkins-hbase4:40167] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:54,031 DEBUG [RS:1;jenkins-hbase4:40167] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:54,031 INFO [RS:1;jenkins-hbase4:40167] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:54,031 INFO [RS:1;jenkins-hbase4:40167] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:54,033 INFO [RS:2;jenkins-hbase4:34751] regionserver.Replication(203): jenkins-hbase4.apache.org,34751,1690150253426 started
2023-07-23 22:10:54,033 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34751,1690150253426, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34751, sessionid=0x101943cb4560003
2023-07-23 22:10:54,033 INFO [RS:0;jenkins-hbase4:39305] regionserver.Replication(203): jenkins-hbase4.apache.org,39305,1690150253107 started
2023-07-23 22:10:54,033 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:54,034 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39305,1690150253107, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39305, sessionid=0x101943cb4560001
2023-07-23 22:10:54,034 DEBUG [RS:2;jenkins-hbase4:34751] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39305,1690150253107'
2023-07-23 22:10:54,034 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34751,1690150253426'
2023-07-23 22:10:54,034 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:54,034 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-23 22:10:54,034 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39305,1690150253107'
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34751,1690150253426
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34751,1690150253426'
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-23 22:10:54,035 DEBUG [RS:0;jenkins-hbase4:39305] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:54,035 INFO [RS:0;jenkins-hbase4:39305] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:54,035 INFO [RS:0;jenkins-hbase4:39305] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:54,035 DEBUG [RS:2;jenkins-hbase4:34751] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-23 22:10:54,035 INFO [RS:2;jenkins-hbase4:34751] quotas.RegionServerRpcQuotaManager(63): Quota support disabled
2023-07-23 22:10:54,035 INFO [RS:2;jenkins-hbase4:34751] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager.
2023-07-23 22:10:54,133 INFO [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40167%2C1690150253268, suffix=, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,40167,1690150253268, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs, maxLogs=32
2023-07-23 22:10:54,137 INFO [RS:0;jenkins-hbase4:39305] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39305%2C1690150253107, suffix=, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,39305,1690150253107, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs, maxLogs=32
2023-07-23 22:10:54,137 INFO [RS:2;jenkins-hbase4:34751] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34751%2C1690150253426, suffix=, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34751,1690150253426, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs, maxLogs=32
2023-07-23 22:10:54,154 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK]
2023-07-23 22:10:54,154 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK]
2023-07-23 22:10:54,155 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]
2023-07-23 22:10:54,155 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]
2023-07-23 22:10:54,155 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]
2023-07-23 22:10:54,156 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]
2023-07-23 22:10:54,156 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]
2023-07-23 22:10:54,156 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK]
2023-07-23 22:10:54,161 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]
2023-07-23 22:10:54,162 INFO [RS:0;jenkins-hbase4:39305] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,39305,1690150253107/jenkins-hbase4.apache.org%2C39305%2C1690150253107.1690150254138
2023-07-23 22:10:54,163 INFO [RS:2;jenkins-hbase4:34751] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34751,1690150253426/jenkins-hbase4.apache.org%2C34751%2C1690150253426.1690150254138
2023-07-23 22:10:54,167 DEBUG [RS:0;jenkins-hbase4:39305] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK], DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK], DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]]
2023-07-23 22:10:54,167 INFO [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,40167,1690150253268/jenkins-hbase4.apache.org%2C40167%2C1690150253268.1690150254134
2023-07-23 22:10:54,170 DEBUG [RS:2;jenkins-hbase4:34751] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK], DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK], DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]]
2023-07-23 22:10:54,170 DEBUG [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK], DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK], DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK]]
2023-07-23 22:10:54,269 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:54,270 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:54,270 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a
2023-07-23 22:10:54,284 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:54,286 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-23 22:10:54,287 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/info
2023-07-23 22:10:54,287 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-23 22:10:54,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:54,288 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-23 22:10:54,289 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:54,289 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-23 22:10:54,290 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:54,290 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-23 22:10:54,291 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/table
2023-07-23 22:10:54,291 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-23 22:10:54,292 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:54,292 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740
2023-07-23 22:10:54,293 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740
2023-07-23 22:10:54,294 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead.
2023-07-23 22:10:54,295 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-23 22:10:54,297 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:54,297 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10907704000, jitterRate=0.015859097242355347}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242}
2023-07-23 22:10:54,297 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-23 22:10:54,298 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes
2023-07-23 22:10:54,298 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740
2023-07-23 22:10:54,298 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740
2023-07-23 22:10:54,298 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms
2023-07-23 22:10:54,298 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740
2023-07-23 22:10:54,298 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-23 22:10:54,298 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-23 22:10:54,299 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta
2023-07-23 22:10:54,299 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta
2023-07-23 22:10:54,299 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}]
2023-07-23 22:10:54,300 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN
2023-07-23 22:10:54,301 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 22:10:54,452 DEBUG [jenkins-hbase4:46533] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:54,453 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40167,1690150253268, state=OPENING
2023-07-23 22:10:54,455 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it
2023-07-23 22:10:54,456 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:54,456 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40167,1690150253268}]
2023-07-23 22:10:54,456 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-23 22:10:54,612 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40167,1690150253268
2023-07-23 22:10:54,612 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-23 22:10:54,614 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37546, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-23 22:10:54,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-07-23 22:10:54,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-23 22:10:54,621 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40167%2C1690150253268.meta, suffix=.meta, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,40167,1690150253268, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs, maxLogs=32
2023-07-23 22:10:54,639 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK]
2023-07-23 22:10:54,639 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]
2023-07-23 22:10:54,639 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK]
2023-07-23 22:10:54,650 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,40167,1690150253268/jenkins-hbase4.apache.org%2C40167%2C1690150253268.meta.1690150254621.meta
2023-07-23 22:10:54,650 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK], DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK], DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]]
2023-07-23 22:10:54,650 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-07-23 22:10:54,651 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-07-23 22:10:54,651 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-07-23 22:10:54,655 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-23 22:10:54,656 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/info
2023-07-23 22:10:54,656 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/info
2023-07-23 22:10:54,657 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-23 22:10:54,657 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:54,658 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-23 22:10:54,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:54,659 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/rep_barrier
2023-07-23 22:10:54,659 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 22:10:54,659 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:54,659 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 22:10:54,660 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/table 2023-07-23 22:10:54,660 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/table 2023-07-23 22:10:54,661 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 22:10:54,661 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:54,662 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740 2023-07-23 22:10:54,663 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740 2023-07-23 22:10:54,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 22:10:54,666 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-23 22:10:54,667 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11130201280, jitterRate=0.03658077120780945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242}
2023-07-23 22:10:54,667 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-23 22:10:54,668 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690150254612
2023-07-23 22:10:54,673 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740
2023-07-23 22:10:54,673 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-07-23 22:10:54,675 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40167,1690150253268, state=OPEN
2023-07-23 22:10:54,677 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-07-23 22:10:54,677 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-23 22:10:54,679 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2
2023-07-23 22:10:54,679 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40167,1690150253268 in 221 msec
2023-07-23 22:10:54,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1
2023-07-23 22:10:54,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 380 msec
2023-07-23 22:10:54,682 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 849 msec
2023-07-23 22:10:54,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690150254682, completionTime=-1
2023-07-23 22:10:54,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running
2023-07-23 22:10:54,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-07-23 22:10:54,688 DEBUG [hconnection-0x77b93960-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-23 22:10:54,690 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37550, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-23 22:10:54,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3
2023-07-23 22:10:54,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690150314691
2023-07-23 22:10:54,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690150374691
2023-07-23 22:10:54,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 9 msec
2023-07-23 22:10:54,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46533,1690150252902-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46533,1690150252902-BalancerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46533,1690150252902-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46533, period=300000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled.
2023-07-23 22:10:54,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating...
2023-07-23 22:10:54,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:54,701 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175):
2023-07-23 22:10:54,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace
2023-07-23 22:10:54,707 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION
2023-07-23 22:10:54,709 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-23 22:10:54,711 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/namespace/9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:54,711 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/namespace/9bc77f5629571267f0470c706b598afe empty.
2023-07-23 22:10:54,712 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/namespace/9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:54,712 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions
2023-07-23 22:10:54,731 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:54,732 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9bc77f5629571267f0470c706b598afe, NAME => 'hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9bc77f5629571267f0470c706b598afe, disabling compactions & flushes
2023-07-23 22:10:54,746 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. after waiting 0 ms
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:54,746 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:54,746 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9bc77f5629571267f0470c706b598afe:
2023-07-23 22:10:54,750 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META
2023-07-23 22:10:54,751 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150254751"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150254751"}]},"ts":"1690150254751"}
2023-07-23 22:10:54,753 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta.
2023-07-23 22:10:54,754 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS
2023-07-23 22:10:54,754 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150254754"}]},"ts":"1690150254754"}
2023-07-23 22:10:54,755 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta
2023-07-23 22:10:54,759 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-23 22:10:54,759 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-23 22:10:54,759 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-23 22:10:54,759 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-23 22:10:54,759 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-23 22:10:54,759 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9bc77f5629571267f0470c706b598afe, ASSIGN}]
2023-07-23 22:10:54,761 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9bc77f5629571267f0470c706b598afe, ASSIGN
2023-07-23 22:10:54,762 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9bc77f5629571267f0470c706b598afe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39305,1690150253107; forceNewPlan=false, retain=false
2023-07-23 22:10:54,912 INFO [jenkins-hbase4:46533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment.
2023-07-23 22:10:54,913 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9bc77f5629571267f0470c706b598afe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:54,914 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150254913"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150254913"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150254913"}]},"ts":"1690150254913"}
2023-07-23 22:10:54,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 9bc77f5629571267f0470c706b598afe, server=jenkins-hbase4.apache.org,39305,1690150253107}]
2023-07-23 22:10:54,950 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-23 22:10:54,953 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup
2023-07-23 22:10:54,959 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION
2023-07-23 22:10:54,960 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT
2023-07-23 22:10:54,961 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d
2023-07-23 22:10:54,962 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d empty.
2023-07-23 22:10:54,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d
2023-07-23 22:10:54,963 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions
2023-07-23 22:10:55,000 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001
2023-07-23 22:10:55,007 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 993313bd90f78641cfff3ee6df046b9d, NAME => 'hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp
2023-07-23 22:10:55,068 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:55,069 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-23 22:10:55,070 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58088, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-23 22:10:55,075 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:55,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9bc77f5629571267f0470c706b598afe, NAME => 'hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.', STARTKEY => '', ENDKEY => ''}
2023-07-23 22:10:55,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:55,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,075 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,076 INFO [StoreOpener-9bc77f5629571267f0470c706b598afe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,078 DEBUG [StoreOpener-9bc77f5629571267f0470c706b598afe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/info
2023-07-23 22:10:55,078 DEBUG [StoreOpener-9bc77f5629571267f0470c706b598afe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/info
2023-07-23 22:10:55,079 INFO [StoreOpener-9bc77f5629571267f0470c706b598afe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9bc77f5629571267f0470c706b598afe columnFamilyName info
2023-07-23 22:10:55,080 INFO [StoreOpener-9bc77f5629571267f0470c706b598afe-1] regionserver.HStore(310): Store=9bc77f5629571267f0470c706b598afe/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-23 22:10:55,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,081 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,084 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9bc77f5629571267f0470c706b598afe
2023-07-23 22:10:55,088 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1
2023-07-23 22:10:55,089 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9bc77f5629571267f0470c706b598afe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10750616480, jitterRate=0.0012291818857192993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1}
2023-07-23 22:10:55,089 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9bc77f5629571267f0470c706b598afe:
2023-07-23 22:10:55,090 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe., pid=6, masterSystemTime=1690150255068
2023-07-23 22:10:55,095 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:55,095 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.
2023-07-23 22:10:55,096 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9bc77f5629571267f0470c706b598afe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39305,1690150253107
2023-07-23 22:10:55,096 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690150255096"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150255096"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150255096"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150255096"}]},"ts":"1690150255096"}
2023-07-23 22:10:55,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5
2023-07-23 22:10:55,103 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 9bc77f5629571267f0470c706b598afe, server=jenkins-hbase4.apache.org,39305,1690150253107 in 186 msec
2023-07-23 22:10:55,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4
2023-07-23 22:10:55,108 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9bc77f5629571267f0470c706b598afe, ASSIGN in 344 msec
2023-07-23 22:10:55,109 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE
2023-07-23 22:10:55,109 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150255109"}]},"ts":"1690150255109"}
2023-07-23 22:10:55,110 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta
2023-07-23 22:10:55,113 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION
2023-07-23 22:10:55,114 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 413 msec
2023-07-23 22:10:55,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace
2023-07-23 22:10:55,207 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:55,207 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:55,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-23 22:10:55,212 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-23 22:10:55,214 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default
2023-07-23 22:10:55,223 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:55,225 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec
2023-07-23 22:10:55,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-07-23 22:10:55,236 DEBUG [PEWorker-4] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1
2023-07-23 22:10:55,237 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase
2023-07-23 22:10:55,313 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties
2023-07-23 22:10:55,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-23 22:10:55,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 993313bd90f78641cfff3ee6df046b9d, disabling compactions & flushes
2023-07-23 22:10:55,419 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.
2023-07-23 22:10:55,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.
2023-07-23 22:10:55,419 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.
after waiting 0 ms 2023-07-23 22:10:55,420 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:55,420 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:55,420 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 993313bd90f78641cfff3ee6df046b9d: 2023-07-23 22:10:55,422 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:55,423 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150255423"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150255423"}]},"ts":"1690150255423"} 2023-07-23 22:10:55,424 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:55,425 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:55,425 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150255425"}]},"ts":"1690150255425"} 2023-07-23 22:10:55,426 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 22:10:55,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:55,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:55,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:55,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:55,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:55,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=993313bd90f78641cfff3ee6df046b9d, ASSIGN}] 2023-07-23 22:10:55,430 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=993313bd90f78641cfff3ee6df046b9d, ASSIGN 2023-07-23 22:10:55,431 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, 
region=993313bd90f78641cfff3ee6df046b9d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40167,1690150253268; forceNewPlan=false, retain=false 2023-07-23 22:10:55,581 INFO [jenkins-hbase4:46533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:55,582 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=993313bd90f78641cfff3ee6df046b9d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:55,583 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150255582"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150255582"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150255582"}]},"ts":"1690150255582"} 2023-07-23 22:10:55,584 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 993313bd90f78641cfff3ee6df046b9d, server=jenkins-hbase4.apache.org,40167,1690150253268}] 2023-07-23 22:10:55,740 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 
2023-07-23 22:10:55,740 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 993313bd90f78641cfff3ee6df046b9d, NAME => 'hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:55,740 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 22:10:55,740 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. service=MultiRowMutationService 2023-07-23 22:10:55,741 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 22:10:55,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:55,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,741 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,742 INFO [StoreOpener-993313bd90f78641cfff3ee6df046b9d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,744 DEBUG [StoreOpener-993313bd90f78641cfff3ee6df046b9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/m 2023-07-23 22:10:55,744 DEBUG [StoreOpener-993313bd90f78641cfff3ee6df046b9d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/m 2023-07-23 22:10:55,744 INFO [StoreOpener-993313bd90f78641cfff3ee6df046b9d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 993313bd90f78641cfff3ee6df046b9d columnFamilyName m 2023-07-23 22:10:55,745 INFO [StoreOpener-993313bd90f78641cfff3ee6df046b9d-1] regionserver.HStore(310): Store=993313bd90f78641cfff3ee6df046b9d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:55,745 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,749 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:55,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:55,751 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 993313bd90f78641cfff3ee6df046b9d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5446d59f, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:55,752 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 993313bd90f78641cfff3ee6df046b9d: 2023-07-23 22:10:55,752 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d., pid=11, masterSystemTime=1690150255736 2023-07-23 22:10:55,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:55,754 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 
2023-07-23 22:10:55,754 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=993313bd90f78641cfff3ee6df046b9d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:55,754 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690150255754"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150255754"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150255754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150255754"}]},"ts":"1690150255754"} 2023-07-23 22:10:55,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-23 22:10:55,757 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 993313bd90f78641cfff3ee6df046b9d, server=jenkins-hbase4.apache.org,40167,1690150253268 in 172 msec 2023-07-23 22:10:55,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=7 2023-07-23 22:10:55,759 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=993313bd90f78641cfff3ee6df046b9d, ASSIGN in 328 msec 2023-07-23 22:10:55,772 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 22:10:55,780 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 539 msec 2023-07-23 22:10:55,781 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, 
state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:55,781 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150255781"}]},"ts":"1690150255781"} 2023-07-23 22:10:55,783 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 22:10:55,785 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:55,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 835 msec 2023-07-23 22:10:55,790 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 22:10:55,792 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 22:10:55,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.164sec 2023-07-23 22:10:55,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 22:10:55,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-23 22:10:55,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 22:10:55,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46533,1690150252902-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 22:10:55,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46533,1690150252902-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 22:10:55,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 22:10:55,822 DEBUG [Listener at localhost/42983] zookeeper.ReadOnlyZKClient(139): Connect 0x64cdcbec to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:55,828 DEBUG [Listener at localhost/42983] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ed2d6d6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:55,831 DEBUG [hconnection-0x3a2dbc4a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:55,833 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37560, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:55,834 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:55,834 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 
22:10:55,858 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 22:10:55,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-23 22:10:55,867 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:55,867 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:55,868 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 22:10:55,869 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 22:10:55,937 DEBUG [Listener at localhost/42983] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 22:10:55,939 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56746, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 22:10:55,944 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 22:10:55,944 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:55,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 22:10:55,945 DEBUG [Listener at localhost/42983] zookeeper.ReadOnlyZKClient(139): Connect 0x736db136 to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:55,950 DEBUG [Listener at localhost/42983] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@24f59cfd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 22:10:55,950 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:55,954 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:55,955 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101943cb456000a connected 2023-07-23 22:10:55,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:55,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request 
for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:55,962 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 22:10:55,980 INFO [Listener at localhost/42983] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 22:10:55,981 INFO [Listener at localhost/42983] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34761 2023-07-23 22:10:55,982 INFO [Listener at localhost/42983] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 22:10:55,983 DEBUG [Listener at localhost/42983] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, 
evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 22:10:55,984 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:55,985 INFO [Listener at localhost/42983] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 22:10:55,986 INFO [Listener at localhost/42983] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34761 connecting to ZooKeeper ensemble=127.0.0.1:59587 2023-07-23 22:10:55,991 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:347610x0, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 22:10:55,993 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(162): regionserver:347610x0, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 22:10:55,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34761-0x101943cb456000b connected 2023-07-23 22:10:55,994 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 22:10:55,995 DEBUG [Listener at localhost/42983] zookeeper.ZKUtil(164): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 22:10:55,995 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34761 2023-07-23 22:10:55,996 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, 
port=34761 2023-07-23 22:10:55,998 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34761 2023-07-23 22:10:56,001 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34761 2023-07-23 22:10:56,001 DEBUG [Listener at localhost/42983] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34761 2023-07-23 22:10:56,003 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 22:10:56,003 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 22:10:56,003 INFO [Listener at localhost/42983] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 22:10:56,004 INFO [Listener at localhost/42983] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 22:10:56,004 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 22:10:56,004 INFO [Listener at localhost/42983] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 22:10:56,004 INFO [Listener at localhost/42983] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 22:10:56,005 INFO [Listener at localhost/42983] http.HttpServer(1146): Jetty bound to port 35297 2023-07-23 22:10:56,005 INFO [Listener at localhost/42983] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 22:10:56,011 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:56,012 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@552ab686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,AVAILABLE} 2023-07-23 22:10:56,012 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:56,012 INFO [Listener at localhost/42983] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f5cadfb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 22:10:56,125 INFO [Listener at localhost/42983] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 22:10:56,126 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 22:10:56,126 INFO [Listener at localhost/42983] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 22:10:56,126 INFO [Listener at localhost/42983] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 22:10:56,127 INFO [Listener at localhost/42983] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 22:10:56,127 INFO [Listener at localhost/42983] handler.ContextHandler(921): 
Started o.a.h.t.o.e.j.w.WebAppContext@41a216b5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/java.io.tmpdir/jetty-0_0_0_0-35297-hbase-server-2_4_18-SNAPSHOT_jar-_-any-572630622477548125/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:56,129 INFO [Listener at localhost/42983] server.AbstractConnector(333): Started ServerConnector@3370d5e7{HTTP/1.1, (http/1.1)}{0.0.0.0:35297} 2023-07-23 22:10:56,129 INFO [Listener at localhost/42983] server.Server(415): Started @43858ms 2023-07-23 22:10:56,132 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(951): ClusterId : 43288b12-8217-480d-a1f0-7edd92db5b64 2023-07-23 22:10:56,132 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 22:10:56,134 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 22:10:56,134 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 22:10:56,136 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 22:10:56,137 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ReadOnlyZKClient(139): Connect 0x25c91e9a to 127.0.0.1:59587 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 22:10:56,141 DEBUG [RS:3;jenkins-hbase4:34761] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b0ee453, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 
2023-07-23 22:10:56,141 DEBUG [RS:3;jenkins-hbase4:34761] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f3175ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 22:10:56,150 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34761 2023-07-23 22:10:56,150 INFO [RS:3;jenkins-hbase4:34761] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 22:10:56,150 INFO [RS:3;jenkins-hbase4:34761] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 22:10:56,150 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 22:10:56,150 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46533,1690150252902 with isa=jenkins-hbase4.apache.org/172.31.14.131:34761, startcode=1690150255979 2023-07-23 22:10:56,150 DEBUG [RS:3;jenkins-hbase4:34761] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 22:10:56,153 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52689, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 22:10:56,153 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46533] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,153 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 22:10:56,153 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a 2023-07-23 22:10:56,153 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41205 2023-07-23 22:10:56,153 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43249 2023-07-23 22:10:56,159 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:56,159 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:56,160 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:56,159 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:56,159 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:56,160 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,160 WARN [RS:3;jenkins-hbase4:34761] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 22:10:56,160 INFO [RS:3;jenkins-hbase4:34761] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 22:10:56,160 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 22:10:56,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:56,160 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:56,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:56,160 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34761,1690150255979] 2023-07-23 22:10:56,164 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902] 
rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 22:10:56,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,164 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:56,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:56,166 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:56,166 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,166 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,166 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,167 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:56,167 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,167 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:56,167 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ZKUtil(162): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,168 DEBUG [RS:3;jenkins-hbase4:34761] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 22:10:56,168 INFO [RS:3;jenkins-hbase4:34761] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 22:10:56,169 INFO [RS:3;jenkins-hbase4:34761] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 22:10:56,174 INFO [RS:3;jenkins-hbase4:34761] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, 
tuning period: 60000 ms 2023-07-23 22:10:56,174 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:56,174 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 22:10:56,175 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] 
executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,176 DEBUG [RS:3;jenkins-hbase4:34761] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 22:10:56,179 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:56,179 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:56,179 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 22:10:56,190 INFO [RS:3;jenkins-hbase4:34761] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 22:10:56,190 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34761,1690150255979-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 22:10:56,202 INFO [RS:3;jenkins-hbase4:34761] regionserver.Replication(203): jenkins-hbase4.apache.org,34761,1690150255979 started 2023-07-23 22:10:56,202 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34761,1690150255979, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34761, sessionid=0x101943cb456000b 2023-07-23 22:10:56,202 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 22:10:56,202 DEBUG [RS:3;jenkins-hbase4:34761] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,202 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34761,1690150255979' 2023-07-23 22:10:56,202 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 22:10:56,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 
'jenkins-hbase4.apache.org,34761,1690150255979' 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 22:10:56,203 DEBUG [RS:3;jenkins-hbase4:34761] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 22:10:56,204 DEBUG [RS:3;jenkins-hbase4:34761] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 22:10:56,204 INFO [RS:3;jenkins-hbase4:34761] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 22:10:56,204 INFO [RS:3;jenkins-hbase4:34761] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 22:10:56,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:56,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:56,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:56,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:56,210 DEBUG [hconnection-0x390d766a-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 22:10:56,211 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 22:10:56,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:56,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:56,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:56,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:56,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:56746 deadline: 1690151456219, 
exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:56,219 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at 
org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-23 22:10:56,220 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:56,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:56,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:56,221 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:56,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:56,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:56,269 INFO [Listener at localhost/42983] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=556 (was 502)

Potentially hanging thread: RS-EventLoopGroup-11-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34751
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RS-EventLoopGroup-9-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data5/current/BP-345229045-172.31.14.131-1690150252196
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: hconnection-0x390d766a-metaLookup-shared--pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1b11dd1a
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp789395755-2532
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Session-HouseKeeper-53881145-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34761
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RS:0;jenkins-hbase4:39305-longCompactions-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: hconnection-0x77b93960-shared-pool-2
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp789395755-2534
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x64cdcbec-SendThread(127.0.0.1:59587)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)

Potentially hanging thread: hconnection-0x3a2dbc4a-shared-pool-0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40167
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883)
    org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp789395755-2530
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp645749070-2224
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp789395755-2531
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40167
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: Timer-27
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x25c91e9a-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)

Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a-prefix:jenkins-hbase4.apache.org,34751,1690150253426
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61961@0x0169a71c
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34761
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: jenkins-hbase4:46533
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104)
    org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34751
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: 452393621@qtp-322968384-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38711
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
    org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
    org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
    org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Potentially hanging thread: qtp90824746-2192
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-14-2
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005, type=LAST_IN_PIPELINE
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327)
    org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-13-1
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: IPC Server idle connection scanner for port 41205
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)

Potentially hanging thread: IPC Server handler 1 on default port 41205
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)

Potentially hanging thread: Listener at localhost/42983-EventThread
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)

Potentially hanging thread: 474283062@qtp-322968384-0
    java.lang.Object.wait(Native Method)
    org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

Potentially hanging thread: IPC Server handler 4 on default port 41205
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)

Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34751
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46533
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:45671
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
    org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46533
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: IPC Parameter Sending Thread #3
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34761
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46533
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: nioEventLoopGroup-14-1
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
    io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RS-EventLoopGroup-15-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39305
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)

Potentially hanging thread: qtp645749070-2221
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: qtp90824746-2194
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587)
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)

Potentially hanging thread: IPC Server handler 2 on default port 41205
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294)
    org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799)

Potentially hanging thread: pool-542-thread-1
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: jenkins-hbase4:39305Replication Statistics #0
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: hconnection-0x77b93960-shared-pool-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)

Potentially hanging thread: ForkJoinPool-2-worker-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824)
    java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693)
    java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)

Potentially hanging thread: RS-EventLoopGroup-13-3
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306)
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42459 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-2b7f49a2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp645749070-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:34751-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:45671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x30243ecb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x30243ecb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:34761 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-743454278_17 at /127.0.0.1:46756 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:45671 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:40167Replication 
Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77b93960-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1489138714) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server handler 3 on default port 36431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-11fafdd0-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40167 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150253855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp90824746-2191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x64cdcbec-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x6b2197b7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 42983 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x736db136-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:45671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45331-SendThread(127.0.0.1:61961) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@49f49fc6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1416500281-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1123170503-2159 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7961ba67[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1770726730_17 at /127.0.0.1:46780 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x6b2197b7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:50072 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x25c91e9a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,35043,1690150247517 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:45671 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x5161f6b0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data6/current/BP-345229045-172.31.14.131-1690150252196 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789395755-2533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1222144388@qtp-1756024406-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34559 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS:2;jenkins-hbase4:34751 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x1d40ca5e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42983-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 4 on default port 42459 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
jenkins-hbase4:34751Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:50140 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2130324460-2263-acceptor-0@663089b6-ServerConnector@703dda43{HTTP/1.1, (http/1.1)}{0.0.0.0:44443} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:51012 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data3/current/BP-345229045-172.31.14.131-1690150252196 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4bb2d0db[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x5161f6b0-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 36431 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a-prefix:jenkins-hbase4.apache.org,39305,1690150253107 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1123170503-2158-acceptor-0@4d2d329b-ServerConnector@779b2aaf{HTTP/1.1, (http/1.1)}{0.0.0.0:43249} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42983 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@14f4699c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:46533 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42459 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@54750611 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1770726730_17 at /127.0.0.1:46866 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data4/current/BP-345229045-172.31.14.131-1690150252196 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2130324460-2264 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150253855 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789395755-2528 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x736db136-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:46800 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_544066919_17 at /127.0.0.1:50134 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data2/current/BP-345229045-172.31.14.131-1690150252196 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:45671 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/42983-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45671 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging 
thread: qtp1416500281-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1141396998@qtp-1756024406-0 
java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp2130324460-2262 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_544066919_17 at /127.0.0.1:46794 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2130324460-2261 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x6b2197b7-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1123170503-2164 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77b93960-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:45671 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-743454278_17 at /127.0.0.1:50096 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp645749070-2219-acceptor-0@2b4f4c37-ServerConnector@748d39b9{HTTP/1.1, (http/1.1)}{0.0.0.0:40541} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp645749070-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42983.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp2130324460-2259 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1123170503-2163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:41205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp90824746-2193 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp645749070-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp90824746-2195 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@343c1c61 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 42459 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42983.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5b85bed1 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x64cdcbec sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2130324460-2260 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 41205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:34761Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:50152 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40167 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 42983 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2130324460-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x30243ecb-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ProcessThread(sid:0 cport:59587): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: 1609337155@qtp-897666868-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35847 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS:3;jenkins-hbase4:34761-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp90824746-2190 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5254a789 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789395755-2529-acceptor-0@410836a2-ServerConnector@3370d5e7{HTTP/1.1, (http/1.1)}{0.0.0.0:35297} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:51084 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData-prefix:jenkins-hbase4.apache.org,46533,1690150252902 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.8@localhost:41205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x1d40ca5e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6074c336 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@5e1ad929 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40167-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77b93960-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-16abae5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42459 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@11d99f6e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77b93960-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1416500281-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61961@0x0169a71c-SendThread(127.0.0.1:61961) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:228) org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) Potentially hanging thread: IPC Server handler 1 on default port 36431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@576b5e7e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1770726730_17 at /127.0.0.1:51068 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a-prefix:jenkins-hbase4.apache.org,40167,1690150253268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42983-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/42983.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 2 on default port 42983 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x1d40ca5e-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-743454278_17 at /127.0.0.1:51036 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42983 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data1) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x25c91e9a-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6ee861ba-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp789395755-2535 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x77b93960-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1123170503-2160 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7e8e97c sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34761 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1026920033@qtp-897666868-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp645749070-2218 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp90824746-2188 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server 
handler 3 on default port 42983 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1416500281-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1416500281-2248 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1123170503-2161 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42459 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1416500281-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x390d766a-shared-pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1416500281-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:51092 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40167 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:59587 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:41205 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp90824746-2189-acceptor-0@43b6a57-ServerConnector@78d00b85{HTTP/1.1, (http/1.1)}{0.0.0.0:40363} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_76137069_17 at /127.0.0.1:46816 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a-prefix:jenkins-hbase4.apache.org,40167,1690150253268.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1770726730_17 at /127.0.0.1:50118 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741831_1007] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x736db136 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/498334846.run(Unknown Source) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:41205 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:61961@0x0169a71c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp2130324460-2265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1416500281-2249-acceptor-0@509fad42-ServerConnector@78819e6b{HTTP/1.1, (http/1.1)}{0.0.0.0:41731} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-535-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server 
handler 4 on default port 36431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS:0;jenkins-hbase4:39305 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4342de5b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59587@0x5161f6b0-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp645749070-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data1/current/BP-345229045-172.31.14.131-1690150252196 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42983-SendThread(127.0.0.1:59587) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7d808172 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45331-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1123170503-2157 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1095836359.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_544066919_17 at /127.0.0.1:51060 [Receiving block BP-345229045-172.31.14.131-1690150252196:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1457054538) connection to localhost/127.0.0.1:45671 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x77b93960-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46533 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-537-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46533,1690150252902 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: 655275252@qtp-2061304167-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp1123170503-2162 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 438803929@qtp-2061304167-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41949 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) - Thread LEAK? -, OpenFileDescriptor=825 (was 776) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 448), ProcessCount=174 (was 176), AvailableMemoryMB=7694 (was 5777) - AvailableMemoryMB LEAK? 
- 2023-07-23 22:10:56,272 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-23 22:10:56,288 INFO [Listener at localhost/42983] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=556, OpenFileDescriptor=825, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=174, AvailableMemoryMB=7692 2023-07-23 22:10:56,288 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-23 22:10:56,289 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-23 22:10:56,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:56,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:56,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:56,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:56,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:56,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:56,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:56,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:56,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:56,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:56,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:56,302 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:56,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:56,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:56,305 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:56,306 INFO [RS:3;jenkins-hbase4:34761] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34761%2C1690150255979, suffix=, logDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34761,1690150255979, archiveDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs, maxLogs=32 2023-07-23 22:10:56,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:56,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:56,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:56,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:56,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:56,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:56,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:56746 deadline: 1690151456311, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:56,312 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:56,314 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:56,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:56,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:56,315 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:56,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:56,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:56,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:56,318 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 22:10:56,320 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:56,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-23 22:10:56,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:56,322 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:56,322 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:56,323 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:56,327 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK] 2023-07-23 22:10:56,327 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK] 2023-07-23 22:10:56,328 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK] 2023-07-23 22:10:56,328 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 22:10:56,330 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,330 INFO [RS:3;jenkins-hbase4:34761] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/WALs/jenkins-hbase4.apache.org,34761,1690150255979/jenkins-hbase4.apache.org%2C34761%2C1690150255979.1690150256306 2023-07-23 22:10:56,330 DEBUG [RS:3;jenkins-hbase4:34761] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38743,DS-f8916320-2f2d-4e1e-965c-b01582d505cb,DISK], DatanodeInfoWithStorage[127.0.0.1:33481,DS-d665cc9d-4cb3-442a-a185-3a555cf1ef82,DISK], DatanodeInfoWithStorage[127.0.0.1:36601,DS-1aade8ba-7df3-4d53-94c5-da7833dd328f,DISK]] 2023-07-23 22:10:56,330 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245 empty. 
2023-07-23 22:10:56,331 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,331 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 22:10:56,344 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-23 22:10:56,345 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => f620f9ca275844ba46ccb0e75255c245, NAME => 't1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp 2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing f620f9ca275844ba46ccb0e75255c245, disabling compactions & flushes 2023-07-23 22:10:56,357 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 
2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. after waiting 0 ms 2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:56,357 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:56,357 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for f620f9ca275844ba46ccb0e75255c245: 2023-07-23 22:10:56,359 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 22:10:56,360 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150256360"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150256360"}]},"ts":"1690150256360"} 2023-07-23 22:10:56,362 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 22:10:56,362 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 22:10:56,363 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150256362"}]},"ts":"1690150256362"} 2023-07-23 22:10:56,363 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 22:10:56,366 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 22:10:56,367 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, ASSIGN}] 2023-07-23 22:10:56,367 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, ASSIGN 2023-07-23 22:10:56,368 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39305,1690150253107; forceNewPlan=false, retain=false 2023-07-23 22:10:56,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:56,518 INFO [jenkins-hbase4:46533] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 22:10:56,520 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f620f9ca275844ba46ccb0e75255c245, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,520 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150256520"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150256520"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150256520"}]},"ts":"1690150256520"} 2023-07-23 22:10:56,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure f620f9ca275844ba46ccb0e75255c245, server=jenkins-hbase4.apache.org,39305,1690150253107}] 2023-07-23 22:10:56,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:56,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 
2023-07-23 22:10:56,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f620f9ca275844ba46ccb0e75255c245, NAME => 't1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.', STARTKEY => '', ENDKEY => ''} 2023-07-23 22:10:56,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 22:10:56,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,678 INFO [StoreOpener-f620f9ca275844ba46ccb0e75255c245-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,679 DEBUG [StoreOpener-f620f9ca275844ba46ccb0e75255c245-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245/cf1 2023-07-23 22:10:56,680 DEBUG [StoreOpener-f620f9ca275844ba46ccb0e75255c245-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245/cf1 
2023-07-23 22:10:56,680 INFO [StoreOpener-f620f9ca275844ba46ccb0e75255c245-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f620f9ca275844ba46ccb0e75255c245 columnFamilyName cf1 2023-07-23 22:10:56,680 INFO [StoreOpener-f620f9ca275844ba46ccb0e75255c245-1] regionserver.HStore(310): Store=f620f9ca275844ba46ccb0e75255c245/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 22:10:56,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:56,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 22:10:56,685 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f620f9ca275844ba46ccb0e75255c245; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11100740320, jitterRate=0.033837005496025085}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 22:10:56,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f620f9ca275844ba46ccb0e75255c245: 2023-07-23 22:10:56,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245., pid=14, masterSystemTime=1690150256673 2023-07-23 22:10:56,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:56,687 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 
2023-07-23 22:10:56,688 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f620f9ca275844ba46ccb0e75255c245, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:56,688 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150256687"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690150256687"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690150256687"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690150256687"}]},"ts":"1690150256687"} 2023-07-23 22:10:56,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-23 22:10:56,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure f620f9ca275844ba46ccb0e75255c245, server=jenkins-hbase4.apache.org,39305,1690150253107 in 168 msec 2023-07-23 22:10:56,692 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 22:10:56,692 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, ASSIGN in 323 msec 2023-07-23 22:10:56,692 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 22:10:56,692 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150256692"}]},"ts":"1690150256692"} 2023-07-23 22:10:56,693 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, 
state=ENABLED in hbase:meta 2023-07-23 22:10:56,695 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 22:10:56,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 377 msec 2023-07-23 22:10:56,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 22:10:56,925 INFO [Listener at localhost/42983] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-23 22:10:56,926 DEBUG [Listener at localhost/42983] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-23 22:10:56,926 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:56,928 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-23 22:10:56,928 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:56,928 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-23 22:10:56,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 22:10:56,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 22:10:56,933 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 22:10:56,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-23 22:10:56,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:56746 deadline: 1690150316929, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-23 22:10:56,935 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:56,936 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-23 22:10:57,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,037 INFO [Listener at localhost/42983] client.HBaseAdmin$15(890): Started disable of t1 2023-07-23 22:10:57,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-23 22:10:57,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-23 22:10:57,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 22:10:57,041 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150257041"}]},"ts":"1690150257041"} 2023-07-23 22:10:57,042 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-23 22:10:57,044 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-23 22:10:57,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, UNASSIGN}] 2023-07-23 22:10:57,045 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, UNASSIGN 2023-07-23 22:10:57,045 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f620f9ca275844ba46ccb0e75255c245, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:57,045 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150257045"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690150257045"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690150257045"}]},"ts":"1690150257045"} 2023-07-23 22:10:57,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure f620f9ca275844ba46ccb0e75255c245, server=jenkins-hbase4.apache.org,39305,1690150253107}] 2023-07-23 22:10:57,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 22:10:57,197 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:57,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f620f9ca275844ba46ccb0e75255c245, disabling compactions & flushes 2023-07-23 22:10:57,201 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:57,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:57,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. after waiting 0 ms 2023-07-23 22:10:57,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 2023-07-23 22:10:57,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/default/t1/f620f9ca275844ba46ccb0e75255c245/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 22:10:57,205 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245. 
2023-07-23 22:10:57,205 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f620f9ca275844ba46ccb0e75255c245: 2023-07-23 22:10:57,206 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:57,207 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=f620f9ca275844ba46ccb0e75255c245, regionState=CLOSED 2023-07-23 22:10:57,207 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690150257207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690150257207"}]},"ts":"1690150257207"} 2023-07-23 22:10:57,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 22:10:57,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure f620f9ca275844ba46ccb0e75255c245, server=jenkins-hbase4.apache.org,39305,1690150253107 in 162 msec 2023-07-23 22:10:57,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-23 22:10:57,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=f620f9ca275844ba46ccb0e75255c245, UNASSIGN in 166 msec 2023-07-23 22:10:57,212 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690150257212"}]},"ts":"1690150257212"} 2023-07-23 22:10:57,214 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-23 22:10:57,215 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to 
state=DISABLED 2023-07-23 22:10:57,217 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-23 22:10:57,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 22:10:57,342 INFO [Listener at localhost/42983] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-23 22:10:57,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-23 22:10:57,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-23 22:10:57,346 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 22:10:57,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-23 22:10:57,346 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-23 22:10:57,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,350 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:57,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 22:10:57,351 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245/cf1, FileablePath, hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245/recovered.edits] 2023-07-23 22:10:57,356 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245/recovered.edits/4.seqid to hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/archive/data/default/t1/f620f9ca275844ba46ccb0e75255c245/recovered.edits/4.seqid 2023-07-23 22:10:57,356 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/.tmp/data/default/t1/f620f9ca275844ba46ccb0e75255c245 2023-07-23 22:10:57,356 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 22:10:57,359 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-23 22:10:57,360 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-23 22:10:57,362 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 
2023-07-23 22:10:57,363 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-23 22:10:57,363 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-23 22:10:57,363 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690150257363"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:57,364 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 22:10:57,364 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f620f9ca275844ba46ccb0e75255c245, NAME => 't1,,1690150256317.f620f9ca275844ba46ccb0e75255c245.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 22:10:57,364 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 
2023-07-23 22:10:57,364 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690150257364"}]},"ts":"9223372036854775807"} 2023-07-23 22:10:57,365 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-23 22:10:57,368 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 22:10:57,369 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-23 22:10:57,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 22:10:57,451 INFO [Listener at localhost/42983] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-23 22:10:57,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:57,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,467 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,469 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:56746 deadline: 1690151457475, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,476 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:57,479 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:57,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,480 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:57,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,498 INFO [Listener at localhost/42983] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 556) - Thread LEAK? -, OpenFileDescriptor=833 (was 825) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 385), ProcessCount=174 (was 174), AvailableMemoryMB=7687 (was 7692) 2023-07-23 22:10:57,498 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-23 22:10:57,515 INFO [Listener at localhost/42983] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=174, AvailableMemoryMB=7686 2023-07-23 22:10:57,515 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-23 22:10:57,515 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-23 22:10:57,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:57,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,529 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,532 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151457539, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,540 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:57,541 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,542 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:57,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 22:10:57,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:57,545 INFO [Listener at localhost/42983] 
rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-23 22:10:57,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 22:10:57,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 22:10:57,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:57,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,564 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,567 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151457575, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,576 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:57,578 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:57,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,579 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:57,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,603 INFO [Listener at localhost/42983] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 385), ProcessCount=174 (was 174), AvailableMemoryMB=7685 (was 7686) 2023-07-23 22:10:57,603 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 22:10:57,622 INFO [Listener at localhost/42983] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=174, AvailableMemoryMB=7685 2023-07-23 22:10:57,622 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 22:10:57,622 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-23 22:10:57,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:57,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,637 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,640 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151457648, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,649 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:57,651 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:57,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,652 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:57,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 22:10:57,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,667 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): 
Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151457676, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,677 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at 
org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 22:10:57,678 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 22:10:57,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,679 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 22:10:57,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 22:10:57,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 22:10:57,699 INFO [Listener at localhost/42983] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 385), ProcessCount=174 (was 174), AvailableMemoryMB=7683 (was 7685) 2023-07-23 22:10:57,699 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-23 22:10:57,723 INFO [Listener at localhost/42983] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=385, ProcessCount=174, AvailableMemoryMB=7682 2023-07-23 22:10:57,723 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-23 22:10:57,723 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-23 22:10:57,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 22:10:57,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 22:10:57,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 22:10:57,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 22:10:57,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 22:10:57,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 22:10:57,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 22:10:57,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 22:10:57,737 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 22:10:57,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 22:10:57,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 22:10:57,740 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 22:10:57,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 22:10:57,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 22:10:57,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 22:10:57,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 22:10:57,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master 2023-07-23 22:10:57,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 22:10:57,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151457746, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 2023-07-23 22:10:57,746 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
2023-07-23 22:10:57,748 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:57,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:57,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:57,749 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:57,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:57,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:57,750 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint
2023-07-23 22:10:57,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo
2023-07-23 22:10:57,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo
2023-07-23 22:10:57,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:57,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:57,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6
2023-07-23 22:10:57,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:57,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:57,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:57,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}
2023-07-23 22:10:57,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:57,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20
2023-07-23 22:10:57,768 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:57,770 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec
2023-07-23 22:10:57,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20
2023-07-23 22:10:57,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo
2023-07-23 22:10:57,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:57,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:56746 deadline: 1690151457869, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
2023-07-23 22:10:57,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}
2023-07-23 22:10:57,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:57,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21
2023-07-23 22:10:57,893 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo
2023-07-23 22:10:57,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec
2023-07-23 22:10:57,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21
2023-07-23 22:10:57,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup
2023-07-23 22:10:57,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup
2023-07-23 22:10:57,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:57,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo
2023-07-23 22:10:57,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:57,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8
2023-07-23 22:10:58,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:58,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:58,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:58,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo
2023-07-23 22:10:58,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,011 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,013 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22
2023-07-23 22:10:58,014 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,015 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo
2023-07-23 22:10:58,015 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace
2023-07-23 22:10:58,016 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,018 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo
2023-07-23 22:10:58,019 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 11 msec
2023-07-23 22:10:58,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22
2023-07-23 22:10:58,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo
2023-07-23 22:10:58,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup
2023-07-23 22:10:58,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:58,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:58,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7
2023-07-23 22:10:58,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:58,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:58,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:58,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
	at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
	at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219)
	at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010)
	at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132)
	at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007)
	at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:58,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:56746 deadline: 1690150318128, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist.
2023-07-23 22:10:58,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:58,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:58,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 22:10:58,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 22:10:58,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:58,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 22:10:58,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:58,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup
2023-07-23 22:10:58,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:58,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:58,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5
2023-07-23 22:10:58,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:58,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 22:10:58,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 22:10:58,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 22:10:58,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 22:10:58,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 22:10:58,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 22:10:58,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:58,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 22:10:58,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 22:10:58,152 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 22:10:58,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 22:10:58,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 22:10:58,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 22:10:58,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 22:10:58,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 22:10:58,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:58,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:58,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46533] to rsgroup master
2023-07-23 22:10:58,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 22:10:58,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56746 deadline: 1690151458163, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist.
2023-07-23 22:10:58,164 WARN [Listener at localhost/42983] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
	at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
	at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
	at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
	at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
	at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
	at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
	at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46533 is either offline or it does not exist.
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
	at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
	at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
	at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
	at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
	at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
	at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
	at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
2023-07-23 22:10:58,166 INFO [Listener at localhost/42983] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 22:10:58,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 22:10:58,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 22:10:58,167 INFO [Listener at localhost/42983] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34751, jenkins-hbase4.apache.org:34761, jenkins-hbase4.apache.org:39305, jenkins-hbase4.apache.org:40167], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 22:10:58,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 22:10:58,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46533] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 22:10:58,185 INFO [Listener at localhost/42983] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=385 (was 385), ProcessCount=174 (was 174), AvailableMemoryMB=7694 (was 7682) - AvailableMemoryMB LEAK? -
2023-07-23 22:10:58,185 WARN [Listener at localhost/42983] hbase.ResourceChecker(130): Thread=573 is superior to 500
2023-07-23 22:10:58,185 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2023-07-23 22:10:58,185 INFO [Listener at localhost/42983] client.ConnectionImplementation(1979): Closing master protocol: MasterService
2023-07-23 22:10:58,185 DEBUG [Listener at localhost/42983] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x64cdcbec to 127.0.0.1:59587
2023-07-23 22:10:58,186 DEBUG [Listener at localhost/42983] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:58,186 DEBUG [Listener at localhost/42983] util.JVMClusterUtil(237): Shutting down HBase Cluster
2023-07-23 22:10:58,186 DEBUG [Listener at localhost/42983] util.JVMClusterUtil(257): Found active master hash=1507379481, stopped=false
2023-07-23 22:10:58,186 DEBUG [Listener at localhost/42983] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint
2023-07-23 22:10:58,186 DEBUG [Listener at localhost/42983] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver
2023-07-23 22:10:58,186 INFO [Listener at localhost/42983] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46533,1690150252902
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:58,189 INFO [Listener at localhost/42983] procedure2.ProcedureExecutor(629): Stopping
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:58,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running
2023-07-23 22:10:58,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:58,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:58,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 22:10:58,190 DEBUG [Listener at localhost/42983] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5161f6b0 to 127.0.0.1:59587
2023-07-23 22:10:58,190 DEBUG [Listener at localhost/42983] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-23 22:10:58,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:58,191 INFO [Listener at localhost/42983] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39305,1690150253107' ***** 2023-07-23 22:10:58,191 INFO [Listener at localhost/42983] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 22:10:58,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 22:10:58,191 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 22:10:58,191 INFO [Listener at localhost/42983] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40167,1690150253268' ***** 2023-07-23 22:10:58,191 INFO [Listener at localhost/42983] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 22:10:58,191 INFO [Listener at localhost/42983] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34751,1690150253426' ***** 2023-07-23 22:10:58,191 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 22:10:58,192 INFO [Listener at localhost/42983] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 22:10:58,192 INFO [Listener at localhost/42983] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34761,1690150255979' ***** 2023-07-23 22:10:58,193 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 22:10:58,194 INFO [Listener at localhost/42983] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 22:10:58,196 INFO 
[MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 22:10:58,196 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 22:10:58,198 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 22:10:58,197 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,201 INFO [RS:1;jenkins-hbase4:40167] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2657b892{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:58,201 INFO [RS:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1be9377a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:58,201 INFO [RS:2;jenkins-hbase4:34751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6c4d16d0{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:58,202 INFO [RS:3;jenkins-hbase4:34761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@41a216b5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 22:10:58,202 INFO [RS:0;jenkins-hbase4:39305] server.AbstractConnector(383): Stopped ServerConnector@78d00b85{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,203 INFO [RS:0;jenkins-hbase4:39305] session.HouseKeeper(149): node0 Stopped scavenging 
2023-07-23 22:10:58,202 INFO [RS:1;jenkins-hbase4:40167] server.AbstractConnector(383): Stopped ServerConnector@748d39b9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,203 INFO [RS:3;jenkins-hbase4:34761] server.AbstractConnector(383): Stopped ServerConnector@3370d5e7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,203 INFO [RS:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50ec94a2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:58,202 INFO [RS:2;jenkins-hbase4:34751] server.AbstractConnector(383): Stopped ServerConnector@78819e6b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,203 INFO [RS:3;jenkins-hbase4:34761] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 22:10:58,203 INFO [RS:1;jenkins-hbase4:40167] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 22:10:58,204 INFO [RS:0;jenkins-hbase4:39305] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c2d406c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:58,204 INFO [RS:2;jenkins-hbase4:34751] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 22:10:58,206 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,206 INFO [RS:0;jenkins-hbase4:39305] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 22:10:58,206 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,205 INFO [RS:1;jenkins-hbase4:40167] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@77270bc7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:58,205 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 22:10:58,205 INFO [RS:3;jenkins-hbase4:34761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f5cadfb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:58,208 INFO [RS:1;jenkins-hbase4:40167] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ace3e95{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:58,207 INFO [RS:0;jenkins-hbase4:39305] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 22:10:58,207 INFO [RS:2;jenkins-hbase4:34751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3abd10aa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:58,208 INFO [RS:0;jenkins-hbase4:39305] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 22:10:58,208 INFO [RS:3;jenkins-hbase4:34761] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@552ab686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:58,209 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(3305): Received CLOSE for 9bc77f5629571267f0470c706b598afe 2023-07-23 22:10:58,209 INFO [RS:2;jenkins-hbase4:34751] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7aeb7de4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:58,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9bc77f5629571267f0470c706b598afe, disabling compactions & flushes 2023-07-23 22:10:58,209 INFO [RS:1;jenkins-hbase4:40167] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 22:10:58,210 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. 2023-07-23 22:10:58,210 INFO [RS:1;jenkins-hbase4:40167] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 22:10:58,210 INFO [RS:3;jenkins-hbase4:34761] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 22:10:58,210 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:58,210 INFO [RS:3;jenkins-hbase4:34761] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 22:10:58,210 INFO [RS:1;jenkins-hbase4:40167] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 22:10:58,210 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 22:10:58,210 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(3305): Received CLOSE for 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:58,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. 2023-07-23 22:10:58,210 INFO [RS:2;jenkins-hbase4:34751] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 22:10:58,210 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:58,210 INFO [RS:2;jenkins-hbase4:34751] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 22:10:58,210 INFO [RS:2;jenkins-hbase4:34751] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 22:10:58,210 INFO [RS:3;jenkins-hbase4:34761] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 22:10:58,210 DEBUG [RS:0;jenkins-hbase4:39305] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1d40ca5e to 127.0.0.1:59587 2023-07-23 22:10:58,211 DEBUG [RS:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,211 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:58,211 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:58,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 993313bd90f78641cfff3ee6df046b9d, disabling compactions & flushes 2023-07-23 22:10:58,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:58,210 DEBUG [RS:1;jenkins-hbase4:40167] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b2197b7 to 127.0.0.1:59587 2023-07-23 22:10:58,210 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. after waiting 0 ms 2023-07-23 22:10:58,211 DEBUG [RS:1;jenkins-hbase4:40167] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. 2023-07-23 22:10:58,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 
2023-07-23 22:10:58,211 DEBUG [RS:2;jenkins-hbase4:34751] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x30243ecb to 127.0.0.1:59587 2023-07-23 22:10:58,211 DEBUG [RS:3;jenkins-hbase4:34761] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x25c91e9a to 127.0.0.1:59587 2023-07-23 22:10:58,211 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 22:10:58,211 DEBUG [RS:3;jenkins-hbase4:34761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,211 DEBUG [RS:2;jenkins-hbase4:34751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,212 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34751,1690150253426; all regions closed. 2023-07-23 22:10:58,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. after waiting 0 ms 2023-07-23 22:10:58,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9bc77f5629571267f0470c706b598afe 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-23 22:10:58,211 INFO [RS:1;jenkins-hbase4:40167] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 22:10:58,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:58,212 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34761,1690150255979; all regions closed. 
2023-07-23 22:10:58,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 993313bd90f78641cfff3ee6df046b9d 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-23 22:10:58,211 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1478): Online Regions={9bc77f5629571267f0470c706b598afe=hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe.} 2023-07-23 22:10:58,212 INFO [RS:1;jenkins-hbase4:40167] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 22:10:58,212 INFO [RS:1;jenkins-hbase4:40167] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 22:10:58,212 DEBUG [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1504): Waiting on 9bc77f5629571267f0470c706b598afe 2023-07-23 22:10:58,212 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 22:10:58,213 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 22:10:58,213 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 993313bd90f78641cfff3ee6df046b9d=hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d.} 2023-07-23 22:10:58,213 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 22:10:58,213 DEBUG [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1504): Waiting on 1588230740, 993313bd90f78641cfff3ee6df046b9d 2023-07-23 22:10:58,213 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 22:10:58,214 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 22:10:58,214 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 22:10:58,214 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 22:10:58,214 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-23 22:10:58,220 DEBUG [RS:2;jenkins-hbase4:34751] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs 2023-07-23 22:10:58,220 INFO [RS:2;jenkins-hbase4:34751] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34751%2C1690150253426:(num 1690150254138) 2023-07-23 22:10:58,220 DEBUG [RS:2;jenkins-hbase4:34751] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,220 INFO [RS:2;jenkins-hbase4:34751] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,220 INFO [RS:2;jenkins-hbase4:34751] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 22:10:58,220 INFO [RS:2;jenkins-hbase4:34751] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 22:10:58,221 INFO [RS:2;jenkins-hbase4:34751] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 22:10:58,221 INFO [RS:2;jenkins-hbase4:34751] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 22:10:58,222 INFO [RS:2;jenkins-hbase4:34751] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34751 2023-07-23 22:10:58,224 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 22:10:58,227 DEBUG [RS:3;jenkins-hbase4:34761] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs 2023-07-23 22:10:58,228 INFO [RS:3;jenkins-hbase4:34761] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34761%2C1690150255979:(num 1690150256306) 2023-07-23 22:10:58,228 DEBUG [RS:3;jenkins-hbase4:34761] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,228 INFO [RS:3;jenkins-hbase4:34761] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,231 INFO [RS:3;jenkins-hbase4:34761] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 22:10:58,231 INFO [RS:3;jenkins-hbase4:34761] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 22:10:58,231 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 22:10:58,231 INFO [RS:3;jenkins-hbase4:34761] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 22:10:58,231 INFO [RS:3;jenkins-hbase4:34761] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 22:10:58,232 INFO [RS:3;jenkins-hbase4:34761] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34761 2023-07-23 22:10:58,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/.tmp/m/f4bffe04de614ee6955be7f6a9d6de26 2023-07-23 22:10:58,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/.tmp/info/5ba5e007fd954e4eb0dad1145c9a55d7 2023-07-23 22:10:58,259 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/info/6af63c84478843028f1862b514a1b1ad 2023-07-23 22:10:58,264 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f4bffe04de614ee6955be7f6a9d6de26 2023-07-23 22:10:58,266 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/.tmp/m/f4bffe04de614ee6955be7f6a9d6de26 as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/m/f4bffe04de614ee6955be7f6a9d6de26 2023-07-23 22:10:58,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5ba5e007fd954e4eb0dad1145c9a55d7 2023-07-23 22:10:58,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/.tmp/info/5ba5e007fd954e4eb0dad1145c9a55d7 as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/info/5ba5e007fd954e4eb0dad1145c9a55d7 2023-07-23 22:10:58,268 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6af63c84478843028f1862b514a1b1ad 2023-07-23 22:10:58,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f4bffe04de614ee6955be7f6a9d6de26 2023-07-23 22:10:58,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/m/f4bffe04de614ee6955be7f6a9d6de26, entries=12, sequenceid=29, filesize=5.4 K 2023-07-23 22:10:58,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 993313bd90f78641cfff3ee6df046b9d in 63ms, sequenceid=29, compaction requested=false 2023-07-23 22:10:58,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5ba5e007fd954e4eb0dad1145c9a55d7 2023-07-23 22:10:58,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/info/5ba5e007fd954e4eb0dad1145c9a55d7, entries=3, sequenceid=9, filesize=5.0 K 2023-07-23 22:10:58,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 9bc77f5629571267f0470c706b598afe in 66ms, sequenceid=9, compaction requested=false 2023-07-23 22:10:58,281 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:58,281 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 22:10:58,281 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/rsgroup/993313bd90f78641cfff3ee6df046b9d/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-23 22:10:58,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/namespace/9bc77f5629571267f0470c706b598afe/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-23 22:10:58,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:58,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:58,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 993313bd90f78641cfff3ee6df046b9d: 2023-07-23 22:10:58,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690150254950.993313bd90f78641cfff3ee6df046b9d. 2023-07-23 22:10:58,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. 2023-07-23 22:10:58,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9bc77f5629571267f0470c706b598afe: 2023-07-23 22:10:58,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690150254700.9bc77f5629571267f0470c706b598afe. 2023-07-23 22:10:58,304 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/rep_barrier/98f4675ffa4242d4a90828054e2c0a44 2023-07-23 22:10:58,310 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 98f4675ffa4242d4a90828054e2c0a44 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34761,1690150255979 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:58,316 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34751,1690150253426 2023-07-23 22:10:58,317 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34751,1690150253426] 2023-07-23 22:10:58,318 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34751,1690150253426; numProcessing=1 2023-07-23 22:10:58,319 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34751,1690150253426 already deleted, retry=false 2023-07-23 22:10:58,319 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34751,1690150253426 expired; onlineServers=3 2023-07-23 22:10:58,319 INFO 
[RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34761,1690150255979] 2023-07-23 22:10:58,319 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34761,1690150255979; numProcessing=2 2023-07-23 22:10:58,326 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34761,1690150255979 already deleted, retry=false 2023-07-23 22:10:58,326 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34761,1690150255979 expired; onlineServers=2 2023-07-23 22:10:58,329 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/table/34561d49749346df9ca8df5715f7a9f7 2023-07-23 22:10:58,335 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34561d49749346df9ca8df5715f7a9f7 2023-07-23 22:10:58,336 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/info/6af63c84478843028f1862b514a1b1ad as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/info/6af63c84478843028f1862b514a1b1ad 2023-07-23 22:10:58,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6af63c84478843028f1862b514a1b1ad 2023-07-23 22:10:58,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/info/6af63c84478843028f1862b514a1b1ad, entries=22, sequenceid=26, filesize=7.3 K 2023-07-23 22:10:58,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/rep_barrier/98f4675ffa4242d4a90828054e2c0a44 as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/rep_barrier/98f4675ffa4242d4a90828054e2c0a44 2023-07-23 22:10:58,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 98f4675ffa4242d4a90828054e2c0a44 2023-07-23 22:10:58,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/rep_barrier/98f4675ffa4242d4a90828054e2c0a44, entries=1, sequenceid=26, filesize=4.9 K 2023-07-23 22:10:58,350 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/.tmp/table/34561d49749346df9ca8df5715f7a9f7 as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/table/34561d49749346df9ca8df5715f7a9f7 2023-07-23 22:10:58,355 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 34561d49749346df9ca8df5715f7a9f7 2023-07-23 22:10:58,355 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/table/34561d49749346df9ca8df5715f7a9f7, entries=6, sequenceid=26, filesize=5.1 K 2023-07-23 22:10:58,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 142ms, sequenceid=26, compaction requested=false 2023-07-23 22:10:58,365 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-23 22:10:58,366 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 22:10:58,366 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 22:10:58,367 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 22:10:58,367 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 22:10:58,412 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39305,1690150253107; all regions closed. 2023-07-23 22:10:58,413 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40167,1690150253268; all regions closed. 
2023-07-23 22:10:58,420 DEBUG [RS:0;jenkins-hbase4:39305] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs 2023-07-23 22:10:58,420 INFO [RS:0;jenkins-hbase4:39305] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39305%2C1690150253107:(num 1690150254138) 2023-07-23 22:10:58,420 DEBUG [RS:0;jenkins-hbase4:39305] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,420 INFO [RS:0;jenkins-hbase4:39305] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,420 DEBUG [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs 2023-07-23 22:10:58,420 INFO [RS:0;jenkins-hbase4:39305] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 22:10:58,420 INFO [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40167%2C1690150253268.meta:.meta(num 1690150254621) 2023-07-23 22:10:58,421 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 22:10:58,421 INFO [RS:0;jenkins-hbase4:39305] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 22:10:58,421 INFO [RS:0;jenkins-hbase4:39305] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 22:10:58,421 INFO [RS:0;jenkins-hbase4:39305] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 22:10:58,422 INFO [RS:0;jenkins-hbase4:39305] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39305 2023-07-23 22:10:58,423 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:58,423 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,424 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39305,1690150253107] 2023-07-23 22:10:58,424 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39305,1690150253107 2023-07-23 22:10:58,424 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39305,1690150253107; numProcessing=3 2023-07-23 22:10:58,427 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39305,1690150253107 already deleted, retry=false 2023-07-23 22:10:58,427 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39305,1690150253107 expired; onlineServers=1 2023-07-23 22:10:58,429 DEBUG [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/oldWALs 2023-07-23 22:10:58,429 INFO [RS:1;jenkins-hbase4:40167] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL 
jenkins-hbase4.apache.org%2C40167%2C1690150253268:(num 1690150254134) 2023-07-23 22:10:58,429 DEBUG [RS:1;jenkins-hbase4:40167] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,429 INFO [RS:1;jenkins-hbase4:40167] regionserver.LeaseManager(133): Closed leases 2023-07-23 22:10:58,429 INFO [RS:1;jenkins-hbase4:40167] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 22:10:58,429 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 22:10:58,430 INFO [RS:1;jenkins-hbase4:40167] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40167 2023-07-23 22:10:58,433 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40167,1690150253268 2023-07-23 22:10:58,433 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 22:10:58,434 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40167,1690150253268] 2023-07-23 22:10:58,434 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40167,1690150253268; numProcessing=4 2023-07-23 22:10:58,435 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40167,1690150253268 already deleted, retry=false 2023-07-23 22:10:58,435 INFO [RegionServerTracker-0] master.ServerManager(561): 
Cluster shutdown set; jenkins-hbase4.apache.org,40167,1690150253268 expired; onlineServers=0 2023-07-23 22:10:58,435 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46533,1690150252902' ***** 2023-07-23 22:10:58,435 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 22:10:58,435 DEBUG [M:0;jenkins-hbase4:46533] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5de746fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 22:10:58,436 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 22:10:58,438 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 22:10:58,438 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 22:10:58,438 INFO [M:0;jenkins-hbase4:46533] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@760f93bc{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 22:10:58,439 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 22:10:58,439 INFO [M:0;jenkins-hbase4:46533] server.AbstractConnector(383): Stopped 
ServerConnector@779b2aaf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,439 INFO [M:0;jenkins-hbase4:46533] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 22:10:58,440 INFO [M:0;jenkins-hbase4:46533] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7e0de42c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 22:10:58,440 INFO [M:0;jenkins-hbase4:46533] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5514b40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/hadoop.log.dir/,STOPPED} 2023-07-23 22:10:58,441 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46533,1690150252902 2023-07-23 22:10:58,441 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46533,1690150252902; all regions closed. 2023-07-23 22:10:58,441 DEBUG [M:0;jenkins-hbase4:46533] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 22:10:58,441 INFO [M:0;jenkins-hbase4:46533] master.HMaster(1491): Stopping master jetty server 2023-07-23 22:10:58,441 INFO [M:0;jenkins-hbase4:46533] server.AbstractConnector(383): Stopped ServerConnector@703dda43{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 22:10:58,442 DEBUG [M:0;jenkins-hbase4:46533] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 22:10:58,442 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-23 22:10:58,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150253855] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690150253855,5,FailOnTimeoutGroup] 2023-07-23 22:10:58,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150253855] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690150253855,5,FailOnTimeoutGroup] 2023-07-23 22:10:58,442 DEBUG [M:0;jenkins-hbase4:46533] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 22:10:58,442 INFO [M:0;jenkins-hbase4:46533] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 22:10:58,442 INFO [M:0;jenkins-hbase4:46533] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 22:10:58,443 INFO [M:0;jenkins-hbase4:46533] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 22:10:58,443 DEBUG [M:0;jenkins-hbase4:46533] master.HMaster(1512): Stopping service threads 2023-07-23 22:10:58,443 INFO [M:0;jenkins-hbase4:46533] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 22:10:58,443 ERROR [M:0;jenkins-hbase4:46533] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 22:10:58,443 INFO [M:0;jenkins-hbase4:46533] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 22:10:58,443 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-23 22:10:58,444 DEBUG [M:0;jenkins-hbase4:46533] zookeeper.ZKUtil(398): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 22:10:58,444 WARN [M:0;jenkins-hbase4:46533] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 22:10:58,444 INFO [M:0;jenkins-hbase4:46533] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 22:10:58,444 INFO [M:0;jenkins-hbase4:46533] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 22:10:58,444 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 22:10:58,444 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:58,444 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 22:10:58,444 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 22:10:58,444 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:58,444 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.22 KB heapSize=90.66 KB 2023-07-23 22:10:58,457 INFO [M:0;jenkins-hbase4:46533] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.22 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/641b78690cb24116a65c7ce7afe916a2 2023-07-23 22:10:58,462 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/641b78690cb24116a65c7ce7afe916a2 as hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/641b78690cb24116a65c7ce7afe916a2 2023-07-23 22:10:58,467 INFO [M:0;jenkins-hbase4:46533] regionserver.HStore(1080): Added hdfs://localhost:41205/user/jenkins/test-data/5bb5c322-2321-d71a-73e6-899cc94a860a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/641b78690cb24116a65c7ce7afe916a2, entries=22, sequenceid=175, filesize=11.1 K 2023-07-23 22:10:58,467 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78050, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=175, compaction requested=false 2023-07-23 22:10:58,471 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 22:10:58,471 DEBUG [M:0;jenkins-hbase4:46533] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 22:10:58,475 INFO [M:0;jenkins-hbase4:46533] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 22:10:58,475 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 22:10:58,476 INFO [M:0;jenkins-hbase4:46533] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46533 2023-07-23 22:10:58,479 DEBUG [M:0;jenkins-hbase4:46533] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46533,1690150252902 already deleted, retry=false 2023-07-23 22:10:58,788 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,788 INFO [M:0;jenkins-hbase4:46533] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46533,1690150252902; zookeeper connection closed. 2023-07-23 22:10:58,788 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): master:46533-0x101943cb4560000, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,888 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,889 INFO [RS:1;jenkins-hbase4:40167] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40167,1690150253268; zookeeper connection closed. 
2023-07-23 22:10:58,889 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:40167-0x101943cb4560002, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,889 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3e5bb366] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3e5bb366 2023-07-23 22:10:58,989 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,989 INFO [RS:0;jenkins-hbase4:39305] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39305,1690150253107; zookeeper connection closed. 2023-07-23 22:10:58,989 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:39305-0x101943cb4560001, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:58,989 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4aa39a5a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4aa39a5a 2023-07-23 22:10:59,089 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:59,089 INFO [RS:3;jenkins-hbase4:34761] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34761,1690150255979; zookeeper connection closed. 
2023-07-23 22:10:59,089 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34761-0x101943cb456000b, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:59,090 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@399b29ea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@399b29ea 2023-07-23 22:10:59,189 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:59,190 DEBUG [Listener at localhost/42983-EventThread] zookeeper.ZKWatcher(600): regionserver:34751-0x101943cb4560003, quorum=127.0.0.1:59587, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 22:10:59,189 INFO [RS:2;jenkins-hbase4:34751] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34751,1690150253426; zookeeper connection closed. 
2023-07-23 22:10:59,190 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@23b4430] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@23b4430 2023-07-23 22:10:59,190 INFO [Listener at localhost/42983] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-23 22:10:59,190 WARN [Listener at localhost/42983] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:59,195 INFO [Listener at localhost/42983] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:59,299 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:59,299 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-345229045-172.31.14.131-1690150252196 (Datanode Uuid 097813cf-8d7d-4630-8800-f6f6727d195d) service to localhost/127.0.0.1:41205 2023-07-23 22:10:59,300 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data5/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,300 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data6/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,301 WARN [Listener at 
localhost/42983] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:59,303 INFO [Listener at localhost/42983] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:59,407 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:59,407 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-345229045-172.31.14.131-1690150252196 (Datanode Uuid d076bb00-a6b7-4c52-a58f-fd85294ad85d) service to localhost/127.0.0.1:41205 2023-07-23 22:10:59,408 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data3/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,408 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data4/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,409 WARN [Listener at localhost/42983] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 22:10:59,412 INFO [Listener at localhost/42983] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:59,515 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] 
datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 22:10:59,515 WARN [BP-345229045-172.31.14.131-1690150252196 heartbeating to localhost/127.0.0.1:41205] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-345229045-172.31.14.131-1690150252196 (Datanode Uuid 1dd5e8ad-3667-4b05-b1eb-ed80e5875453) service to localhost/127.0.0.1:41205 2023-07-23 22:10:59,516 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data1/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,516 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/297e934c-f75c-1f74-01cb-277ba5b98136/cluster_5f5b6979-1dfe-ef14-d7d1-2dc0f605f07c/dfs/data/data2/current/BP-345229045-172.31.14.131-1690150252196] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 22:10:59,526 INFO [Listener at localhost/42983] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 22:10:59,641 INFO [Listener at localhost/42983] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 22:10:59,666 INFO [Listener at localhost/42983] hbase.HBaseTestingUtility(1293): Minicluster is down