2023-07-23 21:10:22,261 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd
2023-07-23 21:10:22,279 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-23 21:10:22,301 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 21:10:22,301 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37, deleteOnExit=true
2023-07-23 21:10:22,301 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 21:10:22,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/test.cache.data in system properties and HBase conf
2023-07-23 21:10:22,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 21:10:22,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir in system properties and HBase conf
2023-07-23 21:10:22,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 21:10:22,304 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 21:10:22,304 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 21:10:22,421 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-23 21:10:22,842 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 21:10:22,846 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 21:10:22,847 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 21:10:22,847 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 21:10:22,847 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 21:10:22,848 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 21:10:22,848 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 21:10:22,849 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 21:10:22,849 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 21:10:22,849 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 21:10:22,850 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/nfs.dump.dir in system properties and HBase conf
2023-07-23 21:10:22,850 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir in system properties and HBase conf
2023-07-23 21:10:22,850 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 21:10:22,851 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-23 21:10:22,851 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-23 21:10:23,403 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 21:10:23,407 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 21:10:23,703 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-23 21:10:23,905 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-23 21:10:23,924 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:23,959 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:23,995 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/Jetty_localhost_35313_hdfs____k27wkd/webapp
2023-07-23 21:10:24,143 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35313
2023-07-23 21:10:24,153 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 21:10:24,153 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 21:10:24,641 WARN [Listener at localhost/46635] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:24,708 WARN [Listener at localhost/46635] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:24,731 WARN [Listener at localhost/46635] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:24,739 INFO [Listener at localhost/46635] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:24,744 INFO [Listener at localhost/46635] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/Jetty_localhost_42003_datanode____.iay0mc/webapp
2023-07-23 21:10:24,870 INFO [Listener at localhost/46635] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42003
2023-07-23 21:10:25,274 WARN [Listener at localhost/44873] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:25,370 WARN [Listener at localhost/44873] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:25,376 WARN [Listener at localhost/44873] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:25,379 INFO [Listener at localhost/44873] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:25,385 INFO [Listener at localhost/44873] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/Jetty_localhost_39025_datanode____fuhak4/webapp
2023-07-23 21:10:25,520 INFO [Listener at localhost/44873] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39025
2023-07-23 21:10:25,533 WARN [Listener at localhost/41441] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:25,571 WARN [Listener at localhost/41441] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 21:10:25,574 WARN [Listener at localhost/41441] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 21:10:25,577 INFO [Listener at localhost/41441] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 21:10:25,584 INFO [Listener at localhost/41441] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/Jetty_localhost_38837_datanode____.7esz9e/webapp
2023-07-23 21:10:25,750 INFO [Listener at localhost/41441] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38837
2023-07-23 21:10:25,766 WARN [Listener at localhost/39787] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 21:10:25,809 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xebb6f96877cf938f: Processing first storage report for DS-4042b899-9f8a-4d07-a83a-8d95f65f4040 from datanode a91c3a38-048b-499b-8a33-9d8067b682fa
2023-07-23 21:10:25,811 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xebb6f96877cf938f: from storage DS-4042b899-9f8a-4d07-a83a-8d95f65f4040 node DatanodeRegistration(127.0.0.1:42041, datanodeUuid=a91c3a38-048b-499b-8a33-9d8067b682fa, infoPort=40019, infoSecurePort=0, ipcPort=41441, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-23 21:10:25,811 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78cf75874f293927: Processing first storage report for DS-b1d9e206-f83d-4afe-9987-b587ccde3809 from datanode 0d41bf30-2479-45be-a34b-ccd64f7ddc57
2023-07-23 21:10:25,811 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x78cf75874f293927: from storage DS-b1d9e206-f83d-4afe-9987-b587ccde3809 node DatanodeRegistration(127.0.0.1:34733, datanodeUuid=0d41bf30-2479-45be-a34b-ccd64f7ddc57, infoPort=39913, infoSecurePort=0, ipcPort=44873, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:25,811 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xebb6f96877cf938f: Processing first storage report for DS-cdf569a7-8898-498d-aa1d-5343ead0d6da from datanode a91c3a38-048b-499b-8a33-9d8067b682fa
2023-07-23 21:10:25,811 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xebb6f96877cf938f: from storage DS-cdf569a7-8898-498d-aa1d-5343ead0d6da node DatanodeRegistration(127.0.0.1:42041, datanodeUuid=a91c3a38-048b-499b-8a33-9d8067b682fa, infoPort=40019, infoSecurePort=0, ipcPort=41441, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 21:10:25,812 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x78cf75874f293927: Processing first storage report for DS-349c2a3c-d4e2-4af3-925d-52d7f3c1a91f from datanode 0d41bf30-2479-45be-a34b-ccd64f7ddc57
2023-07-23 21:10:25,812 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x78cf75874f293927: from storage DS-349c2a3c-d4e2-4af3-925d-52d7f3c1a91f node DatanodeRegistration(127.0.0.1:34733, datanodeUuid=0d41bf30-2479-45be-a34b-ccd64f7ddc57, infoPort=39913, infoSecurePort=0, ipcPort=44873, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:25,922 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30d36be981092057: Processing first storage report for DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10 from datanode 948374e2-cd86-40c5-bf32-7f54f38f83f4
2023-07-23 21:10:25,923 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30d36be981092057: from storage DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10 node DatanodeRegistration(127.0.0.1:35769, datanodeUuid=948374e2-cd86-40c5-bf32-7f54f38f83f4, infoPort=37145, infoSecurePort=0, ipcPort=39787, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 21:10:25,923 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30d36be981092057: Processing first storage report for DS-959ca5dd-5e01-4f19-87c7-a4c9bf2770fd from datanode 948374e2-cd86-40c5-bf32-7f54f38f83f4
2023-07-23 21:10:25,923 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30d36be981092057: from storage DS-959ca5dd-5e01-4f19-87c7-a4c9bf2770fd node DatanodeRegistration(127.0.0.1:35769, datanodeUuid=948374e2-cd86-40c5-bf32-7f54f38f83f4, infoPort=37145, infoSecurePort=0, ipcPort=39787, storageInfo=lv=-57;cid=testClusterID;nsid=547844546;c=1690146623480), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 21:10:26,223 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd
2023-07-23 21:10:26,309 INFO [Listener at localhost/39787] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/zookeeper_0, clientPort=59206, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-23 21:10:26,325 INFO [Listener at localhost/39787] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59206
2023-07-23 21:10:26,333 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:26,336 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:27,134 INFO [Listener at localhost/39787] util.FSUtils(471): Created version file at hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a with version=8
2023-07-23 21:10:27,135 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/hbase-staging
2023-07-23 21:10:27,146 DEBUG [Listener at localhost/39787] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-23 21:10:27,146 DEBUG [Listener at localhost/39787] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-23 21:10:27,147 DEBUG [Listener at localhost/39787] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-23 21:10:27,147 DEBUG [Listener at localhost/39787] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-23 21:10:27,513 INFO [Listener at localhost/39787] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-23 21:10:28,047 INFO [Listener at localhost/39787] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:28,088 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:28,089 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:28,089 INFO [Listener at localhost/39787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:28,089 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:28,089 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:28,266 INFO [Listener at localhost/39787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:28,368 DEBUG [Listener at localhost/39787] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-23 21:10:28,491 INFO [Listener at localhost/39787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46113
2023-07-23 21:10:28,508 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:28,511 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:28,536 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46113 connecting to ZooKeeper ensemble=127.0.0.1:59206
2023-07-23 21:10:28,580 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:461130x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:28,583 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46113-0x10194055df50000 connected
2023-07-23 21:10:28,623 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:28,624 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:28,627 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:28,636 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46113
2023-07-23 21:10:28,637 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46113
2023-07-23 21:10:28,637 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46113
2023-07-23 21:10:28,637 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46113
2023-07-23 21:10:28,638 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46113
2023-07-23 21:10:28,673 INFO [Listener at localhost/39787] log.Log(170): Logging initialized @7107ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-23 21:10:28,813 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:28,814 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:28,815 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:28,816 INFO [Listener at localhost/39787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-23 21:10:28,817 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:28,817 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:28,820 INFO [Listener at localhost/39787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:28,884 INFO [Listener at localhost/39787] http.HttpServer(1146): Jetty bound to port 42575
2023-07-23 21:10:28,885 INFO [Listener at localhost/39787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 21:10:28,922 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:28,926 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7410039f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,AVAILABLE}
2023-07-23 21:10:28,927 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:28,927 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34fd62ed{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 21:10:29,123 INFO [Listener at localhost/39787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 21:10:29,136 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 21:10:29,137 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 21:10:29,139 INFO [Listener at localhost/39787] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-23 21:10:29,145 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:29,171 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@55ffcf1a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/jetty-0_0_0_0-42575-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4752391756876135983/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 21:10:29,184 INFO [Listener at localhost/39787] server.AbstractConnector(333): Started ServerConnector@2092751{HTTP/1.1, (http/1.1)}{0.0.0.0:42575}
2023-07-23 21:10:29,185 INFO [Listener at localhost/39787] server.Server(415): Started @7618ms
2023-07-23 21:10:29,189 INFO [Listener at localhost/39787] master.HMaster(444): hbase.rootdir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a, hbase.cluster.distributed=false
2023-07-23 21:10:29,260 INFO [Listener at localhost/39787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:29,260 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,260 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,260 INFO [Listener at localhost/39787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:29,261 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,261 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:29,266 INFO [Listener at localhost/39787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:29,269 INFO [Listener at localhost/39787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34893
2023-07-23 21:10:29,271 INFO [Listener at localhost/39787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 21:10:29,278 DEBUG [Listener at localhost/39787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 21:10:29,280 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:29,282 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:29,285 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34893 connecting to ZooKeeper ensemble=127.0.0.1:59206
2023-07-23 21:10:29,291 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:348930x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:29,292 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34893-0x10194055df50001 connected
2023-07-23 21:10:29,293 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:29,294 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:29,295 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:29,296 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34893
2023-07-23 21:10:29,296 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34893
2023-07-23 21:10:29,296 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34893
2023-07-23 21:10:29,297 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34893
2023-07-23 21:10:29,297 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34893
2023-07-23 21:10:29,299 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:29,299 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:29,299 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:29,300 INFO [Listener at localhost/39787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 21:10:29,301 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:29,301 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:29,301 INFO [Listener at localhost/39787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:29,303 INFO [Listener at localhost/39787] http.HttpServer(1146): Jetty bound to port 40765
2023-07-23 21:10:29,303 INFO [Listener at localhost/39787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 21:10:29,306 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:29,306 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10630bfe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,AVAILABLE}
2023-07-23 21:10:29,306 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:29,307 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@282c1c14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 21:10:29,430 INFO [Listener at localhost/39787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 21:10:29,432 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 21:10:29,432 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 21:10:29,432 INFO [Listener at localhost/39787] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-23 21:10:29,434 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 21:10:29,439 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6770849c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/jetty-0_0_0_0-40765-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6581763015032597020/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 21:10:29,441 INFO [Listener at localhost/39787] server.AbstractConnector(333): Started ServerConnector@24b5075d{HTTP/1.1, (http/1.1)}{0.0.0.0:40765}
2023-07-23 21:10:29,441 INFO [Listener at localhost/39787] server.Server(415): Started @7875ms
2023-07-23 21:10:29,456 INFO [Listener at localhost/39787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 21:10:29,456 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,456 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,457 INFO [Listener at localhost/39787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 21:10:29,457 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 21:10:29,457 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 21:10:29,458 INFO [Listener at localhost/39787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 21:10:29,460 INFO [Listener at localhost/39787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46093
2023-07-23 21:10:29,460 INFO [Listener at localhost/39787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 21:10:29,462 DEBUG [Listener at localhost/39787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 21:10:29,464 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:29,466 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 21:10:29,467 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46093 connecting to ZooKeeper ensemble=127.0.0.1:59206
2023-07-23 21:10:29,471 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:460930x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 21:10:29,472 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46093-0x10194055df50002 connected
2023-07-23 21:10:29,473 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 21:10:29,474 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 21:10:29,476 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 21:10:29,476 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46093
2023-07-23 21:10:29,476 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46093
2023-07-23 21:10:29,477 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46093
2023-07-23 21:10:29,477 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46093
2023-07-23 21:10:29,477 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46093
2023-07-23 21:10:29,483 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 21:10:29,483 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 21:10:29,483 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 21:10:29,484 INFO [Listener at localhost/39787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 21:10:29,484 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 21:10:29,484 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 21:10:29,484 INFO [Listener at localhost/39787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 21:10:29,485 INFO [Listener at localhost/39787] http.HttpServer(1146): Jetty bound to port 46291 2023-07-23 21:10:29,485 INFO [Listener at localhost/39787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:29,497 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,498 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@539fa719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:29,498 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,499 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c8a8cb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:29,634 INFO [Listener at localhost/39787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:29,635 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:29,635 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:29,635 INFO [Listener at localhost/39787] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:10:29,636 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,637 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@46ffcd75{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/jetty-0_0_0_0-46291-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6576935942766606597/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:29,638 INFO [Listener at localhost/39787] server.AbstractConnector(333): Started ServerConnector@7f6e0343{HTTP/1.1, (http/1.1)}{0.0.0.0:46291} 2023-07-23 21:10:29,638 INFO [Listener at localhost/39787] server.Server(415): Started @8072ms 2023-07-23 21:10:29,651 INFO [Listener at localhost/39787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:29,651 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:29,651 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:29,651 INFO [Listener at localhost/39787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:29,651 INFO 
[Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:29,651 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:29,652 INFO [Listener at localhost/39787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:29,653 INFO [Listener at localhost/39787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37385 2023-07-23 21:10:29,653 INFO [Listener at localhost/39787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:29,655 DEBUG [Listener at localhost/39787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:29,656 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:29,658 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:29,660 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37385 connecting to ZooKeeper ensemble=127.0.0.1:59206 2023-07-23 21:10:29,664 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:373850x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:29,666 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37385-0x10194055df50003 connected 2023-07-23 21:10:29,666 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:29,666 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:29,667 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:29,669 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37385 2023-07-23 21:10:29,670 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37385 2023-07-23 21:10:29,670 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37385 2023-07-23 21:10:29,674 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37385 2023-07-23 21:10:29,674 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37385 2023-07-23 21:10:29,676 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:29,676 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:29,677 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:29,677 INFO [Listener at localhost/39787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:29,677 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:29,677 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:29,678 INFO [Listener at localhost/39787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:29,679 INFO [Listener at localhost/39787] http.HttpServer(1146): Jetty bound to port 45819 2023-07-23 21:10:29,679 INFO [Listener at localhost/39787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:29,684 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,684 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67c48ba9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:29,685 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,685 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5109bb49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:29,811 INFO [Listener at localhost/39787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:29,812 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:29,812 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:29,812 INFO [Listener at localhost/39787] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:29,814 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:29,815 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2f709731{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/jetty-0_0_0_0-45819-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8183320027020657529/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:29,816 INFO [Listener at localhost/39787] server.AbstractConnector(333): Started ServerConnector@206a46fd{HTTP/1.1, (http/1.1)}{0.0.0.0:45819} 2023-07-23 21:10:29,816 INFO [Listener at localhost/39787] server.Server(415): Started @8250ms 2023-07-23 21:10:29,822 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:29,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@87ecd56{HTTP/1.1, (http/1.1)}{0.0.0.0:40369} 2023-07-23 21:10:29,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8260ms 2023-07-23 21:10:29,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:29,837 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:29,839 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:29,858 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:29,858 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:29,858 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:29,858 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:29,858 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:29,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:29,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46113,1690146627323 from backup master directory 2023-07-23 21:10:29,861 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:29,866 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:29,867 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:29,867 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:29,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:29,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-23 21:10:29,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-23 21:10:29,974 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/hbase.id with ID: c212240f-ef04-43d0-ba3e-08f5b0046088 2023-07-23 21:10:30,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:30,037 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:30,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x41aea838 to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:30,135 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ee020b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:30,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:30,163 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:10:30,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-23 21:10:30,184 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-23 21:10:30,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:30,191 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:30,192 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:30,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store-tmp 2023-07-23 21:10:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:30,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:30,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:30,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:10:30,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:30,341 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/WALs/jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:30,365 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46113%2C1690146627323, suffix=, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/WALs/jenkins-hbase4.apache.org,46113,1690146627323, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/oldWALs, maxLogs=10 2023-07-23 21:10:30,427 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:30,427 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:30,427 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:30,436 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:30,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/WALs/jenkins-hbase4.apache.org,46113,1690146627323/jenkins-hbase4.apache.org%2C46113%2C1690146627323.1690146630375 2023-07-23 21:10:30,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK], DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK]] 2023-07-23 21:10:30,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:30,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:30,520 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,522 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,597 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,605 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:10:30,640 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:10:30,653 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 21:10:30,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,660 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:30,679 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:30,679 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12037880000, jitterRate=0.12111493945121765}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:30,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:30,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:10:30,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:10:30,705 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:10:30,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 21:10:30,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-23 21:10:30,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 43 msec 2023-07-23 21:10:30,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:10:30,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 21:10:30,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-23 21:10:30,798 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 21:10:30,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:10:30,809 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:10:30,811 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:30,812 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:10:30,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:10:30,826 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:10:30,831 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:30,831 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:30,831 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:30,831 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:30,831 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:30,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46113,1690146627323, sessionid=0x10194055df50000, setting cluster-up flag (Was=false) 2023-07-23 21:10:30,850 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:30,856 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:10:30,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:30,864 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:30,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:10:30,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:30,874 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.hbase-snapshot/.tmp 2023-07-23 21:10:30,921 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(951): ClusterId : c212240f-ef04-43d0-ba3e-08f5b0046088 2023-07-23 21:10:30,921 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(951): ClusterId : c212240f-ef04-43d0-ba3e-08f5b0046088 2023-07-23 21:10:30,921 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(951): ClusterId : c212240f-ef04-43d0-ba3e-08f5b0046088 2023-07-23 21:10:30,927 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:30,927 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:30,927 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:30,934 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:30,934 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:30,934 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:30,934 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:30,934 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:30,935 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:30,939 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:30,939 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:30,939 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:30,942 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ReadOnlyZKClient(139): Connect 0x7a0f2748 to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-23 21:10:30,942 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ReadOnlyZKClient(139): Connect 0x62bfe45b to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:30,942 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ReadOnlyZKClient(139): Connect 0x660e7f25 to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:30,950 DEBUG [RS:1;jenkins-hbase4:46093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f0db66d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:30,950 DEBUG [RS:1;jenkins-hbase4:46093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7551bb38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:30,951 DEBUG [RS:0;jenkins-hbase4:34893] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1773511e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:30,951 DEBUG [RS:2;jenkins-hbase4:37385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a4e2241, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:30,951 DEBUG [RS:0;jenkins-hbase4:34893] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@136b20c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:30,951 DEBUG [RS:2;jenkins-hbase4:37385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1a44c1f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:30,957 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:10:30,968 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:10:30,974 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:30,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:10:30,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-23 21:10:30,982 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46093 2023-07-23 21:10:30,982 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:34893 2023-07-23 21:10:30,984 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37385 2023-07-23 21:10:30,990 INFO [RS:0;jenkins-hbase4:34893] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:30,990 INFO [RS:2;jenkins-hbase4:37385] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:30,991 INFO [RS:2;jenkins-hbase4:37385] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:30,990 INFO [RS:1;jenkins-hbase4:46093] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:30,992 INFO [RS:1;jenkins-hbase4:46093] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:30,992 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:30,991 INFO [RS:0;jenkins-hbase4:34893] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:30,992 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:30,992 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:30,995 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:46093, startcode=1690146629455 2023-07-23 21:10:30,995 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:37385, startcode=1690146629650 2023-07-23 21:10:30,995 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:34893, startcode=1690146629259 2023-07-23 21:10:31,020 DEBUG [RS:1;jenkins-hbase4:46093] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:31,020 DEBUG [RS:2;jenkins-hbase4:37385] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:31,020 DEBUG [RS:0;jenkins-hbase4:34893] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:31,086 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:31,095 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33891, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:31,095 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57427, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-23 21:10:31,095 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42963, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:31,106 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:31,118 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:31,120 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:31,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:31,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 
0.0 etc. 2023-07-23 21:10:31,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:31,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:10:31,140 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:31,140 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:31,141 WARN [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:31,140 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 21:10:31,141 WARN [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:31,141 WARN [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:10:31,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:31,143 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146661144 2023-07-23 21:10:31,147 
INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:10:31,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:10:31,151 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:31,152 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 21:10:31,155 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:31,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:10:31,160 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:10:31,160 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:10:31,160 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:10:31,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:31,163 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:10:31,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:10:31,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:10:31,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:10:31,170 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:10:31,174 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146631172,5,FailOnTimeoutGroup] 2023-07-23 21:10:31,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146631175,5,FailOnTimeoutGroup] 2023-07-23 21:10:31,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 21:10:31,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:31,226 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:31,228 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:31,228 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a 2023-07-23 21:10:31,245 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:46093, startcode=1690146629455 2023-07-23 21:10:31,245 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:37385, startcode=1690146629650 2023-07-23 21:10:31,245 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:34893, startcode=1690146629259 2023-07-23 21:10:31,254 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,254 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:31,256 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:10:31,256 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:10:31,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:31,259 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/info 2023-07-23 21:10:31,260 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:31,261 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,261 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:31,263 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,263 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:10:31,264 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 21:10:31,264 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a 2023-07-23 21:10:31,264 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46635 2023-07-23 21:10:31,264 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-23 21:10:31,265 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:31,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:10:31,266 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a 2023-07-23 21:10:31,266 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46635 2023-07-23 21:10:31,266 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-23 21:10:31,268 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:31,269 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a 2023-07-23 21:10:31,269 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46635 2023-07-23 21:10:31,269 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-23 21:10:31,269 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:31,270 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,270 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:31,273 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/table 2023-07-23 21:10:31,274 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:31,275 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,276 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740 2023-07-23 21:10:31,277 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:31,278 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740 2023-07-23 21:10:31,279 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,279 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,279 WARN [RS:2;jenkins-hbase4:37385] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:31,279 WARN [RS:0;jenkins-hbase4:34893] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:31,280 INFO [RS:0;jenkins-hbase4:34893] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:31,280 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,287 INFO [RS:2;jenkins-hbase4:37385] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:31,288 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,288 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,288 WARN [RS:1;jenkins-hbase4:46093] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:31,288 INFO [RS:1;jenkins-hbase4:46093] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:31,288 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,290 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37385,1690146629650] 2023-07-23 21:10:31,290 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34893,1690146629259] 2023-07-23 21:10:31,290 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46093,1690146629455] 2023-07-23 21:10:31,304 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:10:31,308 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:31,309 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,309 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,310 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,310 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,311 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,311 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,311 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,312 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,312 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,322 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:31,327 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10234598400, jitterRate=-0.0468287467956543}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:31,328 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:31,328 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:31,328 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:31,328 DEBUG [RS:0;jenkins-hbase4:34893] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:31,329 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:31,329 
INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:31,329 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:31,329 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:31,329 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:31,330 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:31,330 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:31,337 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:31,337 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 21:10:31,343 INFO [RS:1;jenkins-hbase4:46093] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:31,343 INFO [RS:0;jenkins-hbase4:34893] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:31,343 INFO [RS:2;jenkins-hbase4:37385] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:31,347 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:10:31,365 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:10:31,368 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 21:10:31,371 INFO [RS:1;jenkins-hbase4:46093] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:31,373 INFO [RS:2;jenkins-hbase4:37385] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:31,373 INFO [RS:0;jenkins-hbase4:34893] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:31,377 INFO [RS:2;jenkins-hbase4:37385] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:31,377 INFO [RS:0;jenkins-hbase4:34893] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:31,377 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
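The MemStoreFlusher lines above (globalMemStoreLimit=782.4 M with a low mark of 743.3 M, roughly 95% of the limit) reflect global memstore sizing, and the throughput controller lines record the 50-100 MB/second compaction throughput bounds. A hedged sketch of the corresponding settings, assuming the usual hbase.regionserver.global.memstore.* and hbase.hstore.compaction.throughput.* keys; the class name is illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemstoreAndThroughputSketch {
      public static Configuration limits() {
        Configuration conf = HBaseConfiguration.create();
        // Upper bound as a fraction of the region server heap; about 782 MB in this test JVM.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Low-water mark as a fraction of the upper bound (743.3 / 782.4 is roughly 0.95).
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds reported by PressureAwareCompactionThroughputController.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }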
2023-07-23 21:10:31,377 INFO [RS:1;jenkins-hbase4:46093] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:31,378 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,378 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,379 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:31,382 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:31,383 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:31,391 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,391 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,391 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,391 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 
21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,392 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:31,392 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:1;jenkins-hbase4:46093] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:0;jenkins-hbase4:34893] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,393 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:31,394 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,394 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,394 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,394 DEBUG [RS:2;jenkins-hbase4:37385] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:31,398 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,399 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,417 INFO [RS:0;jenkins-hbase4:34893] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:31,417 INFO [RS:1;jenkins-hbase4:46093] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:31,417 INFO [RS:2;jenkins-hbase4:37385] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:31,421 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34893,1690146629259-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,421 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37385,1690146629650-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:31,421 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46093,1690146629455-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:31,450 INFO [RS:1;jenkins-hbase4:46093] regionserver.Replication(203): jenkins-hbase4.apache.org,46093,1690146629455 started 2023-07-23 21:10:31,450 INFO [RS:2;jenkins-hbase4:37385] regionserver.Replication(203): jenkins-hbase4.apache.org,37385,1690146629650 started 2023-07-23 21:10:31,450 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46093,1690146629455, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46093, sessionid=0x10194055df50002 2023-07-23 21:10:31,450 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37385,1690146629650, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37385, sessionid=0x10194055df50003 2023-07-23 21:10:31,451 INFO [RS:0;jenkins-hbase4:34893] regionserver.Replication(203): jenkins-hbase4.apache.org,34893,1690146629259 started 2023-07-23 21:10:31,451 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:31,451 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34893,1690146629259, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34893, sessionid=0x10194055df50001 2023-07-23 21:10:31,451 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:31,451 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:31,452 DEBUG [RS:0;jenkins-hbase4:34893] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,451 DEBUG [RS:2;jenkins-hbase4:37385] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,452 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34893,1690146629259' 2023-07-23 21:10:31,452 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37385,1690146629650' 2023-07-23 21:10:31,451 DEBUG [RS:1;jenkins-hbase4:46093] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,453 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:31,453 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:31,453 DEBUG [RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46093,1690146629455' 2023-07-23 21:10:31,453 DEBUG [RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:31,454 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:31,454 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:31,454 DEBUG 
[RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:31,454 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:31,454 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:31,454 DEBUG [RS:2;jenkins-hbase4:37385] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:31,454 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37385,1690146629650' 2023-07-23 21:10:31,455 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:31,455 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:31,455 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:31,455 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:31,455 DEBUG [RS:0;jenkins-hbase4:34893] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:31,456 DEBUG [RS:2;jenkins-hbase4:37385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:31,455 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:31,456 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34893,1690146629259' 2023-07-23 21:10:31,456 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:31,456 DEBUG [RS:1;jenkins-hbase4:46093] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,456 DEBUG [RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46093,1690146629455' 2023-07-23 21:10:31,456 DEBUG [RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:31,456 DEBUG [RS:2;jenkins-hbase4:37385] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:31,456 INFO [RS:2;jenkins-hbase4:37385] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:31,456 INFO [RS:2;jenkins-hbase4:37385] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 21:10:31,456 DEBUG [RS:0;jenkins-hbase4:34893] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:31,457 DEBUG [RS:1;jenkins-hbase4:46093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:31,457 DEBUG [RS:0;jenkins-hbase4:34893] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:31,457 INFO [RS:0;jenkins-hbase4:34893] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:31,459 INFO [RS:0;jenkins-hbase4:34893] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:10:31,460 DEBUG [RS:1;jenkins-hbase4:46093] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:31,460 INFO [RS:1;jenkins-hbase4:46093] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:31,460 INFO [RS:1;jenkins-hbase4:46093] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:10:31,521 DEBUG [jenkins-hbase4:46113] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:10:31,538 DEBUG [jenkins-hbase4:46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:31,539 DEBUG [jenkins-hbase4:46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:31,539 DEBUG [jenkins-hbase4:46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:31,539 DEBUG [jenkins-hbase4:46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:31,539 DEBUG [jenkins-hbase4:46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:31,544 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46093,1690146629455, state=OPENING 2023-07-23 21:10:31,554 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 21:10:31,555 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:31,556 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:31,560 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:31,570 INFO [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46093%2C1690146629455, suffix=, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,46093,1690146629455, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs, maxLogs=32 2023-07-23 21:10:31,570 INFO [RS:0;jenkins-hbase4:34893] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C34893%2C1690146629259, suffix=, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,34893,1690146629259, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs, maxLogs=32 2023-07-23 21:10:31,571 INFO [RS:2;jenkins-hbase4:37385] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37385%2C1690146629650, suffix=, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,37385,1690146629650, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs, maxLogs=32 2023-07-23 21:10:31,660 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:31,669 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:31,671 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:31,673 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:31,673 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:31,673 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:31,673 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:31,674 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:31,674 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:31,692 INFO [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,46093,1690146629455/jenkins-hbase4.apache.org%2C46093%2C1690146629455.1690146631576 2023-07-23 21:10:31,692 INFO [RS:0;jenkins-hbase4:34893] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,34893,1690146629259/jenkins-hbase4.apache.org%2C34893%2C1690146629259.1690146631576 2023-07-23 21:10:31,692 INFO [RS:2;jenkins-hbase4:37385] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,37385,1690146629650/jenkins-hbase4.apache.org%2C37385%2C1690146629650.1690146631576 2023-07-23 21:10:31,692 DEBUG [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK], DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK]] 2023-07-23 21:10:31,694 DEBUG [RS:2;jenkins-hbase4:37385] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK], DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK]] 2023-07-23 21:10:31,694 DEBUG [RS:0;jenkins-hbase4:34893] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK], DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK]] 2023-07-23 21:10:31,736 WARN [ReadOnlyZKClient-127.0.0.1:59206@0x41aea838] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 21:10:31,753 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,756 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:31,759 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49916, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:31,772 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:31,780 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49918, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:31,780 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46093] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:49918 deadline: 1690146691780, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:31,785 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:10:31,785 INFO 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:31,789 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46093%2C1690146629455.meta, suffix=.meta, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,46093,1690146629455, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs, maxLogs=32 2023-07-23 21:10:31,810 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:31,811 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:31,812 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:31,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,46093,1690146629455/jenkins-hbase4.apache.org%2C46093%2C1690146629455.meta.1690146631790.meta 2023-07-23 21:10:31,820 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK], DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK]] 2023-07-23 21:10:31,821 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:31,824 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:31,827 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:10:31,829 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
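Each region server above instantiates an AsyncFSWALProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=32, and the server carrying hbase:meta additionally opens a dedicated .meta WAL. A minimal sketch of the matching configuration, assuming the standard WAL keys (rollsize is derived from blocksize times the roll multiplier); the class name is hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static Configuration asyncWal() {
        Configuration conf = HBaseConfiguration.create();
        // Select the async WAL implementation named in the log (AsyncFSWALProvider).
        conf.set("hbase.wal.provider", "asyncfs");
        // blocksize=256 MB; rollsize=128 MB follows from blocksize * logroll multiplier (0.5).
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }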
2023-07-23 21:10:31,835 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:10:31,835 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:31,835 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:10:31,835 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:10:31,838 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:31,840 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/info 2023-07-23 21:10:31,840 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/info 2023-07-23 21:10:31,841 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:31,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,842 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:31,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:31,843 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:31,844 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:31,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,846 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:31,847 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/table 2023-07-23 21:10:31,847 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/table 2023-07-23 21:10:31,848 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:31,849 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:31,850 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740 2023-07-23 21:10:31,853 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740 2023-07-23 21:10:31,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:10:31,861 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:31,862 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9788928320, jitterRate=-0.08833500742912292}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:31,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:31,874 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690146631748 2023-07-23 21:10:31,896 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:10:31,897 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:10:31,898 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46093,1690146629455, state=OPEN 2023-07-23 21:10:31,903 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:10:31,903 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:31,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 21:10:31,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46093,1690146629455 in 343 msec 2023-07-23 21:10:31,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 21:10:31,921 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 565 msec 2023-07-23 21:10:31,934 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 943 msec 2023-07-23 21:10:31,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146631934, completionTime=-1 2023-07-23 21:10:31,934 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 21:10:31,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 21:10:32,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:10:32,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146692008 2023-07-23 21:10:32,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146752008 2023-07-23 21:10:32,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 73 msec 2023-07-23 21:10:32,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46113,1690146627323-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:32,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46113,1690146627323-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:32,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46113,1690146627323-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:32,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46113, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:32,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:32,059 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 21:10:32,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 21:10:32,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:32,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 21:10:32,083 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:32,086 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:32,105 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,108 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd empty. 2023-07-23 21:10:32,109 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,109 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 21:10:32,166 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:32,169 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f2fe29390f399eae0a4221056d0e01bd, NAME => 'hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:32,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:32,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f2fe29390f399eae0a4221056d0e01bd, disabling compactions & flushes 2023-07-23 21:10:32,196 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 
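The create statement above shows the full descriptor used for hbase:namespace: a single 'info' family with ROW bloom filter, in-memory caching, 10 versions and 8 KB blocks. For reference, a roughly equivalent descriptor could be assembled with the HBase 2.x builder API; this is an illustrative sketch, not the code path the master actually runs:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceTableSketch {
      public static TableDescriptor descriptor() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(10)                  // VERSIONS => '10'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .build())
            .build();
      }
    }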
2023-07-23 21:10:32,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:32,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. after waiting 0 ms 2023-07-23 21:10:32,196 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:32,196 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:32,197 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f2fe29390f399eae0a4221056d0e01bd: 2023-07-23 21:10:32,201 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:32,223 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146632207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146632207"}]},"ts":"1690146632207"} 2023-07-23 21:10:32,257 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:32,259 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:32,264 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146632259"}]},"ts":"1690146632259"} 2023-07-23 21:10:32,269 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 21:10:32,273 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:32,273 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:32,273 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:32,273 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:32,273 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:32,275 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f2fe29390f399eae0a4221056d0e01bd, ASSIGN}] 2023-07-23 21:10:32,278 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f2fe29390f399eae0a4221056d0e01bd, ASSIGN 2023-07-23 21:10:32,279 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f2fe29390f399eae0a4221056d0e01bd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:32,299 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:32,302 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:10:32,304 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:32,306 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:32,310 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,311 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35 empty. 
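
The 'hbase:rsgroup' descriptor logged at 21:10:32,299 above carries two table attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A comparable descriptor could be declared from a client roughly as sketched below; this is illustrative only, and the class and table names are assumptions rather than anything from this test.

    // Sketch: one family 'm' with a single version, the multi-row-mutation coprocessor,
    // and splitting disabled, mirroring the attributes shown in the log entry above.
    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    final class RsGroupLikeTableSketch {
      static void create(Admin admin) throws IOException {
        TableDescriptorBuilder builder = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("rsgroup_demo"))  // hypothetical name, not hbase:rsgroup itself
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("m"))
                .setMaxVersions(1)
                .build())
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
        admin.createTable(builder.build());
      }
    }
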
2023-07-23 21:10:32,312 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,312 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 21:10:32,336 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:32,338 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 044211867ef276b1af97934dff65ac35, NAME => 'hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 044211867ef276b1af97934dff65ac35, disabling compactions & flushes 2023-07-23 21:10:32,360 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. after waiting 0 ms 2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,360 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 
2023-07-23 21:10:32,360 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 044211867ef276b1af97934dff65ac35: 2023-07-23 21:10:32,364 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:32,366 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146632366"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146632366"}]},"ts":"1690146632366"} 2023-07-23 21:10:32,369 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:32,372 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:32,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146632372"}]},"ts":"1690146632372"} 2023-07-23 21:10:32,378 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 21:10:32,382 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:32,382 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:32,382 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:32,382 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:32,382 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:32,382 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=044211867ef276b1af97934dff65ac35, ASSIGN}] 2023-07-23 21:10:32,385 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=044211867ef276b1af97934dff65ac35, ASSIGN 2023-07-23 21:10:32,387 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=044211867ef276b1af97934dff65ac35, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:32,388 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-23 21:10:32,389 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f2fe29390f399eae0a4221056d0e01bd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:32,389 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=044211867ef276b1af97934dff65ac35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:32,390 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146632389"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146632389"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146632389"}]},"ts":"1690146632389"} 2023-07-23 21:10:32,390 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146632389"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146632389"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146632389"}]},"ts":"1690146632389"} 2023-07-23 21:10:32,394 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure f2fe29390f399eae0a4221056d0e01bd, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:32,396 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 044211867ef276b1af97934dff65ac35, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:32,554 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 
2023-07-23 21:10:32,554 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f2fe29390f399eae0a4221056d0e01bd, NAME => 'hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:32,555 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:32,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,556 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,559 INFO [StoreOpener-f2fe29390f399eae0a4221056d0e01bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,561 DEBUG [StoreOpener-f2fe29390f399eae0a4221056d0e01bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/info 2023-07-23 21:10:32,561 DEBUG [StoreOpener-f2fe29390f399eae0a4221056d0e01bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/info 2023-07-23 21:10:32,562 INFO [StoreOpener-f2fe29390f399eae0a4221056d0e01bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f2fe29390f399eae0a4221056d0e01bd columnFamilyName info 2023-07-23 21:10:32,563 INFO [StoreOpener-f2fe29390f399eae0a4221056d0e01bd-1] regionserver.HStore(310): Store=f2fe29390f399eae0a4221056d0e01bd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:32,564 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,565 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,570 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:32,573 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:32,574 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f2fe29390f399eae0a4221056d0e01bd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12024262080, jitterRate=0.11984667181968689}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:32,574 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f2fe29390f399eae0a4221056d0e01bd: 2023-07-23 21:10:32,576 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd., pid=8, masterSystemTime=1690146632547 2023-07-23 21:10:32,580 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:32,580 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:32,580 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,580 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 044211867ef276b1af97934dff65ac35, NAME => 'hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:32,581 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:32,581 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. service=MultiRowMutationService 2023-07-23 21:10:32,582 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
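
At this point the hbase:namespace region has been opened on jenkins-hbase4.apache.org,46093 and its post-open deploy tasks have completed. In test code, the usual way to block until such assignments finish is via HBaseTestingUtility; the sketch below is an illustration under the assumption that the test's shared utility instance is passed in.

    // Sketch: block until hbase:meta reports every region of 'table' as OPEN, then verify
    // the table answers requests. testUtil would be the test class's HBaseTestingUtility.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      static void await(HBaseTestingUtility testUtil, TableName table) throws Exception {
        testUtil.waitUntilAllRegionsAssigned(table);
        testUtil.waitTableAvailable(table);
      }
    }
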
2023-07-23 21:10:32,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:32,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,582 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,582 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=f2fe29390f399eae0a4221056d0e01bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:32,583 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146632582"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146632582"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146632582"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146632582"}]},"ts":"1690146632582"} 2023-07-23 21:10:32,584 INFO [StoreOpener-044211867ef276b1af97934dff65ac35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,586 DEBUG [StoreOpener-044211867ef276b1af97934dff65ac35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/m 2023-07-23 21:10:32,587 DEBUG [StoreOpener-044211867ef276b1af97934dff65ac35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/m 2023-07-23 21:10:32,587 INFO [StoreOpener-044211867ef276b1af97934dff65ac35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 044211867ef276b1af97934dff65ac35 columnFamilyName m 2023-07-23 21:10:32,588 INFO [StoreOpener-044211867ef276b1af97934dff65ac35-1] regionserver.HStore(310): Store=044211867ef276b1af97934dff65ac35/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:32,589 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,591 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-23 21:10:32,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,591 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure f2fe29390f399eae0a4221056d0e01bd, server=jenkins-hbase4.apache.org,46093,1690146629455 in 192 msec 2023-07-23 21:10:32,596 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:32,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-23 21:10:32,597 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f2fe29390f399eae0a4221056d0e01bd, ASSIGN in 316 msec 2023-07-23 21:10:32,598 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:32,599 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146632598"}]},"ts":"1690146632598"} 2023-07-23 21:10:32,600 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:32,601 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 044211867ef276b1af97934dff65ac35; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@28014f6b, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:32,601 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 044211867ef276b1af97934dff65ac35: 2023-07-23 21:10:32,601 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 21:10:32,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35., pid=9, masterSystemTime=1690146632547 2023-07-23 21:10:32,604 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:32,605 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,605 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:32,606 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=044211867ef276b1af97934dff65ac35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:32,606 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146632606"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146632606"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146632606"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146632606"}]},"ts":"1690146632606"} 2023-07-23 21:10:32,609 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 537 msec 2023-07-23 21:10:32,613 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 21:10:32,615 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 044211867ef276b1af97934dff65ac35, server=jenkins-hbase4.apache.org,46093,1690146629455 in 213 msec 2023-07-23 21:10:32,618 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-23 21:10:32,618 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=044211867ef276b1af97934dff65ac35, ASSIGN in 232 msec 2023-07-23 21:10:32,619 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:32,620 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146632619"}]},"ts":"1690146632619"} 2023-07-23 21:10:32,623 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 21:10:32,626 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:32,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 327 msec 2023-07-23 21:10:32,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 21:10:32,684 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:32,685 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:32,712 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:10:32,712 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-23 21:10:32,721 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 21:10:32,748 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:32,754 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 42 msec 2023-07-23 21:10:32,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:10:32,777 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:32,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-23 21:10:32,792 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:32,792 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:32,794 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:10:32,795 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:32,800 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:10:32,800 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:10:32,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.932sec 2023-07-23 21:10:32,803 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 21:10:32,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 21:10:32,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:10:32,805 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46113,1690146627323-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:10:32,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46113,1690146627323-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:10:32,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:10:32,865 DEBUG [Listener at localhost/39787] zookeeper.ReadOnlyZKClient(139): Connect 0x518a774a to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:32,871 DEBUG [Listener at localhost/39787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3041a83e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:32,886 DEBUG [hconnection-0x2a5e2fc3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:32,897 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:32,907 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:32,909 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:32,917 DEBUG [Listener at localhost/39787] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:10:32,920 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56014, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:10:32,934 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:10:32,934 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:32,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 21:10:32,939 DEBUG [Listener at localhost/39787] zookeeper.ReadOnlyZKClient(139): Connect 0x09752cf4 to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:32,944 DEBUG 
[Listener at localhost/39787] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2db71259, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:32,944 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59206 2023-07-23 21:10:32,947 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:32,947 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10194055df5000a connected 2023-07-23 21:10:32,977 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=673, MaxFileDescriptor=60000, SystemLoadAverage=472, ProcessCount=175, AvailableMemoryMB=6211 2023-07-23 21:10:32,980 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-23 21:10:33,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:33,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:33,047 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:10:33,061 INFO [Listener at localhost/39787] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:33,061 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:33,062 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:33,062 INFO [Listener at localhost/39787] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:33,062 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:33,062 INFO [Listener at localhost/39787] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:33,062 INFO [Listener at localhost/39787] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:33,067 INFO [Listener at localhost/39787] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35321 2023-07-23 21:10:33,068 INFO [Listener at localhost/39787] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:33,071 DEBUG 
[Listener at localhost/39787] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:33,073 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:33,077 INFO [Listener at localhost/39787] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:33,087 INFO [Listener at localhost/39787] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35321 connecting to ZooKeeper ensemble=127.0.0.1:59206 2023-07-23 21:10:33,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:353210x0, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:33,095 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35321-0x10194055df5000b connected 2023-07-23 21:10:33,098 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:33,100 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:10:33,101 DEBUG [Listener at localhost/39787] zookeeper.ZKUtil(164): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:33,108 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35321 2023-07-23 21:10:33,109 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35321 2023-07-23 21:10:33,109 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35321 2023-07-23 21:10:33,114 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35321 2023-07-23 21:10:33,114 DEBUG [Listener at localhost/39787] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35321 2023-07-23 21:10:33,117 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:33,117 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:33,117 INFO [Listener at localhost/39787] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:33,118 INFO [Listener at localhost/39787] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:33,118 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:33,119 INFO [Listener at localhost/39787] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:33,119 INFO [Listener at localhost/39787] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:33,120 INFO [Listener at localhost/39787] http.HttpServer(1146): Jetty bound to port 35381 2023-07-23 21:10:33,120 INFO [Listener at localhost/39787] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:33,127 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:33,128 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@533f7cd2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:33,128 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:33,128 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c431029{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:33,252 INFO [Listener at localhost/39787] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:33,253 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:33,254 INFO [Listener at localhost/39787] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:33,254 INFO [Listener at localhost/39787] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:33,255 INFO [Listener at localhost/39787] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:33,257 INFO [Listener at localhost/39787] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4f3a0b5c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/java.io.tmpdir/jetty-0_0_0_0-35381-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4180954044464962342/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:33,258 INFO [Listener at localhost/39787] server.AbstractConnector(333): Started ServerConnector@3feecd6d{HTTP/1.1, (http/1.1)}{0.0.0.0:35381} 2023-07-23 21:10:33,259 INFO [Listener at localhost/39787] server.Server(415): Started @11692ms 2023-07-23 21:10:33,261 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(951): ClusterId : c212240f-ef04-43d0-ba3e-08f5b0046088 2023-07-23 21:10:33,262 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:33,270 DEBUG 
[RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:33,270 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:33,272 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:33,274 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ReadOnlyZKClient(139): Connect 0x3850d5ef to 127.0.0.1:59206 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:33,279 DEBUG [RS:3;jenkins-hbase4:35321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36973c95, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:33,279 DEBUG [RS:3;jenkins-hbase4:35321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32a8be9f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:33,288 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35321 2023-07-23 21:10:33,288 INFO [RS:3;jenkins-hbase4:35321] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:33,288 INFO [RS:3;jenkins-hbase4:35321] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:33,288 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:33,289 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46113,1690146627323 with isa=jenkins-hbase4.apache.org/172.31.14.131:35321, startcode=1690146633061 2023-07-23 21:10:33,289 DEBUG [RS:3;jenkins-hbase4:35321] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:33,293 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60989, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:33,293 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46113] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,293 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
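
The RSGroupInfoManagerImpl entries above show the newly registered server being folded into the 'default' group. From a test or client, group membership can be inspected with the RSGroupAdminClient shipped in the hbase-rsgroup module; the following is only a sketch, and the Connection is assumed to come from the running minicluster.

    // Sketch: print every group the master knows about and the servers it contains; right
    // after this startup that should be the single 'default' group holding all region servers.
    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class ListRsGroupsSketch {
      static void dump(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          System.out.println(group.getName() + " -> " + group.getServers());
        }
      }
    }
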
2023-07-23 21:10:33,294 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a 2023-07-23 21:10:33,294 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:46635 2023-07-23 21:10:33,294 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42575 2023-07-23 21:10:33,299 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:33,299 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:33,299 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:33,299 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:33,299 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:33,300 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35321,1690146633061] 2023-07-23 21:10:33,300 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:33,300 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,300 WARN [RS:3;jenkins-hbase4:35321] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:33,300 INFO [RS:3;jenkins-hbase4:35321] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:33,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:33,300 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1948): logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:33,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:33,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,308 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46113,1690146627323] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:10:33,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,310 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,312 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,313 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:33,313 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,314 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,314 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ZKUtil(162): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,315 DEBUG [RS:3;jenkins-hbase4:35321] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:33,316 INFO [RS:3;jenkins-hbase4:35321] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:33,317 INFO [RS:3;jenkins-hbase4:35321] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:33,319 INFO [RS:3;jenkins-hbase4:35321] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:33,319 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:33,326 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:33,329 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,329 DEBUG [RS:3;jenkins-hbase4:35321] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:33,334 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:33,334 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:33,335 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:33,350 INFO [RS:3;jenkins-hbase4:35321] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:33,350 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35321,1690146633061-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:33,363 INFO [RS:3;jenkins-hbase4:35321] regionserver.Replication(203): jenkins-hbase4.apache.org,35321,1690146633061 started 2023-07-23 21:10:33,363 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35321,1690146633061, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35321, sessionid=0x10194055df5000b 2023-07-23 21:10:33,363 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:33,363 DEBUG [RS:3;jenkins-hbase4:35321] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,363 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35321,1690146633061' 2023-07-23 21:10:33,363 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35321,1690146633061' 2023-07-23 21:10:33,364 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:33,365 DEBUG [RS:3;jenkins-hbase4:35321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:33,365 DEBUG [RS:3;jenkins-hbase4:35321] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:33,365 INFO [RS:3;jenkins-hbase4:35321] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:10:33,365 INFO [RS:3;jenkins-hbase4:35321] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
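
The entries above show a fourth region server (RS:3, port 35321) joining the already-running mini cluster and completing startup (procedure managers, chores, quota managers). Purely as orientation for where such a server comes from in a test, the following sketch starts one extra region server on a running HBaseTestingUtility cluster; the utility handle name TEST_UTIL is an assumption and is not taken from this log.

    // Hedged sketch: add one more region server to a running mini cluster.
    // Assumes a started HBaseTestingUtility named TEST_UTIL (assumption, not from this log).
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class StartExtraRegionServer {
      public static void startOneMore(HBaseTestingUtility TEST_UTIL) throws Exception {
        // Starts an additional HRegionServer thread; it registers itself under
        // /hbase/rs in ZooKeeper, which is what produces the NodeChildrenChanged
        // events and the "RegionServer ephemeral node created" entry above.
        JVMClusterUtil.RegionServerThread rst =
            TEST_UTIL.getMiniHBaseCluster().startRegionServer();
        // Block until the new server has fully come online before using it.
        rst.waitForServerOnline();
      }
    }
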
2023-07-23 21:10:33,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:33,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:33,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:33,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:33,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:33,380 DEBUG [hconnection-0x724df952-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:33,383 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:33,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:33,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:33,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:33,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:33,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:56014 deadline: 1690147833398, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
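
The handler entries above record an AddRSGroup call for a group named "master" followed by an attempt to move the master's own address (port 46113) into it, which the server rejects with a ConstraintException because only live region servers can be moved between groups. As a hedged illustration only (the conn connection is assumed; this is not the test's actual code), the equivalent calls through the RSGroupAdminClient that appears in the stack trace below look roughly like this:

    // Hedged sketch of the rsgroup admin calls recorded above.
    // Assumes an open Connection named conn; names are illustrative.
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupMoveExample {
      public static void demo(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master"); // the AddRSGroup request in the log
        // Moving the HMaster's address fails: it is not a region server, so the
        // server side answers with ConstraintException ("offline or does not exist").
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 46113)),
              "master");
        } catch (ConstraintException expected) {
          // TestRSGroupsBase logs this as "Got this on setup, FYI" and continues.
        }
      }
    }
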
2023-07-23 21:10:33,400 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:33,401 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:33,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:33,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:33,403 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:33,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:33,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:33,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:33,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:33,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:33,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:33,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:33,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:33,429 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:33,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:33,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:33,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:33,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to default 2023-07-23 21:10:33,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:33,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:33,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:33,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:33,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:33,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:33,466 INFO 
[PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:33,468 INFO [RS:3;jenkins-hbase4:35321] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35321%2C1690146633061, suffix=, logDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,35321,1690146633061, archiveDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs, maxLogs=32 2023-07-23 21:10:33,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-23 21:10:33,472 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:33,473 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:33,473 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:33,474 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:33,480 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:33,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:10:33,491 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:33,491 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:33,492 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:33,492 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:33,492 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 empty. 
2023-07-23 21:10:33,492 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:33,492 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 empty. 2023-07-23 21:10:33,493 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f empty. 2023-07-23 21:10:33,493 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad empty. 2023-07-23 21:10:33,494 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:33,497 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 empty. 2023-07-23 21:10:33,497 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:33,498 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:33,498 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:33,498 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:33,499 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK] 2023-07-23 21:10:33,499 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 21:10:33,500 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK] 2023-07-23 21:10:33,506 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK] 2023-07-23 21:10:33,520 INFO [RS:3;jenkins-hbase4:35321] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/WALs/jenkins-hbase4.apache.org,35321,1690146633061/jenkins-hbase4.apache.org%2C35321%2C1690146633061.1690146633470 2023-07-23 21:10:33,522 DEBUG [RS:3;jenkins-hbase4:35321] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42041,DS-4042b899-9f8a-4d07-a83a-8d95f65f4040,DISK], DatanodeInfoWithStorage[127.0.0.1:34733,DS-b1d9e206-f83d-4afe-9987-b587ccde3809,DISK], DatanodeInfoWithStorage[127.0.0.1:35769,DS-cfb00db5-72d4-41e7-bb14-6daadbbc3a10,DISK]] 2023-07-23 21:10:33,553 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:33,555 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => de18c46fae8e1a623e19caa9ecc4a54f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:33,555 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b7929c05b3320857451e9f37ff6b6234, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:33,555 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2dae535cfa89620c3cfb25560d043112, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:33,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to 
see if procedure is done pid=12 2023-07-23 21:10:33,614 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 2dae535cfa89620c3cfb25560d043112, disabling compactions & flushes 2023-07-23 21:10:33,615 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing de18c46fae8e1a623e19caa9ecc4a54f, disabling compactions & flushes 2023-07-23 21:10:33,615 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. after waiting 0 ms 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. after waiting 0 ms 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:33,615 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 
2023-07-23 21:10:33,615 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 2dae535cfa89620c3cfb25560d043112: 2023-07-23 21:10:33,616 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:33,616 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for de18c46fae8e1a623e19caa9ecc4a54f: 2023-07-23 21:10:33,617 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 53838dbf2294656bb16fe08be5da16ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:33,617 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2fecdee3dfc79f394ed8516b711aef23, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:33,617 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:33,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b7929c05b3320857451e9f37ff6b6234, disabling compactions & flushes 2023-07-23 21:10:33,618 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:33,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:33,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 
after waiting 0 ms 2023-07-23 21:10:33,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:33,618 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:33,618 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b7929c05b3320857451e9f37ff6b6234: 2023-07-23 21:10:33,642 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:33,643 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 2fecdee3dfc79f394ed8516b711aef23, disabling compactions & flushes 2023-07-23 21:10:33,643 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:33,643 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:33,643 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. after waiting 0 ms 2023-07-23 21:10:33,643 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:33,643 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:33,643 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 2fecdee3dfc79f394ed8516b711aef23: 2023-07-23 21:10:33,645 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:33,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 53838dbf2294656bb16fe08be5da16ad, disabling compactions & flushes 2023-07-23 21:10:33,646 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 
2023-07-23 21:10:33,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:33,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. after waiting 0 ms 2023-07-23 21:10:33,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:33,646 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:33,646 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 53838dbf2294656bb16fe08be5da16ad: 2023-07-23 21:10:33,651 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:33,652 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146633652"}]},"ts":"1690146633652"} 2023-07-23 21:10:33,652 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146633652"}]},"ts":"1690146633652"} 2023-07-23 21:10:33,652 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146633652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146633652"}]},"ts":"1690146633652"} 2023-07-23 21:10:33,653 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146633652"}]},"ts":"1690146633652"} 2023-07-23 21:10:33,653 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146633652"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146633652"}]},"ts":"1690146633652"} 2023-07-23 21:10:33,714 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
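
At this point the CreateTableProcedure (pid=12) has written the table descriptor, initialized the five regions, and added them to hbase:meta. For orientation only, a pre-split table with the same shape can be created through the standard Admin API roughly as sketched below; the split keys mirror the region boundaries shown in the log, and the admin handle is an assumption.

    // Hedged sketch: create a table pre-split into five regions like the one above.
    // Assumes an Admin handle named admin; illustrative, not the test's actual code.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTable {
      public static void create(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Four split keys yield five regions, matching the boundaries in the log:
        // '' .. aaaaa .. i\xBF\x14i\xBE .. r\x1C\xC7r\x1B .. zzzzz .. ''
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        admin.createTable(desc, splitKeys);
      }
    }
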
2023-07-23 21:10:33,716 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:33,716 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146633716"}]},"ts":"1690146633716"} 2023-07-23 21:10:33,724 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 21:10:33,734 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:33,734 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:33,734 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:33,735 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:33,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, ASSIGN}] 2023-07-23 21:10:33,739 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, ASSIGN 2023-07-23 21:10:33,739 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, ASSIGN 2023-07-23 21:10:33,740 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, ASSIGN 2023-07-23 21:10:33,740 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, ASSIGN 2023-07-23 21:10:33,742 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:33,742 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:33,742 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:33,742 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:33,744 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, ASSIGN 2023-07-23 21:10:33,745 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:33,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:10:33,893 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-23 21:10:33,895 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,895 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,895 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,895 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:33,895 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:33,896 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146633895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146633895"}]},"ts":"1690146633895"} 2023-07-23 21:10:33,896 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146633895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146633895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146633895"}]},"ts":"1690146633895"} 2023-07-23 21:10:33,896 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146633895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146633895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146633895"}]},"ts":"1690146633895"} 2023-07-23 21:10:33,896 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146633895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146633895"}]},"ts":"1690146633895"} 2023-07-23 21:10:33,896 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146633895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146633895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146633895"}]},"ts":"1690146633895"} 2023-07-23 21:10:33,899 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=14, state=RUNNABLE; OpenRegionProcedure 
de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:33,901 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:33,903 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=17, state=RUNNABLE; OpenRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:33,905 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:33,907 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:34,052 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:34,052 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:34,055 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:34,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:34,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2fecdee3dfc79f394ed8516b711aef23, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 21:10:34,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:34,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 
2023-07-23 21:10:34,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7929c05b3320857451e9f37ff6b6234, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 21:10:34,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:34,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,071 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,073 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,074 DEBUG [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/f 2023-07-23 21:10:34,074 DEBUG [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/f 2023-07-23 21:10:34,074 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2fecdee3dfc79f394ed8516b711aef23 columnFamilyName f 2023-07-23 21:10:34,075 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] regionserver.HStore(310): Store=2fecdee3dfc79f394ed8516b711aef23/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:34,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,079 DEBUG [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/f 2023-07-23 21:10:34,079 DEBUG [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/f 2023-07-23 21:10:34,080 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7929c05b3320857451e9f37ff6b6234 columnFamilyName f 2023-07-23 21:10:34,081 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] regionserver.HStore(310): Store=b7929c05b3320857451e9f37ff6b6234/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:34,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:34,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:34,102 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2fecdee3dfc79f394ed8516b711aef23; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10696684640, jitterRate=-0.0037936121225357056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:34,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2fecdee3dfc79f394ed8516b711aef23: 2023-07-23 21:10:34,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7929c05b3320857451e9f37ff6b6234; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11839475840, jitterRate=0.10263711214065552}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:34,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:10:34,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7929c05b3320857451e9f37ff6b6234: 2023-07-23 21:10:34,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234., pid=19, masterSystemTime=1690146634056 2023-07-23 21:10:34,108 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23., pid=21, masterSystemTime=1690146634052 2023-07-23 21:10:34,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:34,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:34,112 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 
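
Each store opened above prints its effective CompactionConfiguration: minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, throttle point 2684354560, major period 604800000 ms with 0.5 jitter. These appear to correspond to the standard compaction settings; the sketch below (the key-to-field mapping is an assumption on my part, and the snippet is not part of this test run) shows how a configuration with the same values could be spelled out explicitly.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: spell out the compaction values printed by CompactionConfiguration above.
public final class CompactionDefaultsSketch {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // ratio
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);  // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f); // major jitter
    return conf;
  }
}
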
2023-07-23 21:10:34,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2dae535cfa89620c3cfb25560d043112, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 21:10:34,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:34,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,116 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:34,116 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634116"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146634116"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146634116"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146634116"}]},"ts":"1690146634116"} 2023-07-23 21:10:34,116 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:34,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 
2023-07-23 21:10:34,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de18c46fae8e1a623e19caa9ecc4a54f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 21:10:34,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:34,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,121 DEBUG [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/f 2023-07-23 21:10:34,121 DEBUG [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/f 2023-07-23 21:10:34,121 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,121 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2dae535cfa89620c3cfb25560d043112 columnFamilyName f 2023-07-23 21:10:34,122 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] regionserver.HStore(310): Store=2dae535cfa89620c3cfb25560d043112/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:34,123 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:34,124 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634123"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146634123"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146634123"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146634123"}]},"ts":"1690146634123"} 2023-07-23 21:10:34,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,127 DEBUG [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/f 2023-07-23 21:10:34,133 DEBUG [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/f 2023-07-23 21:10:34,134 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de18c46fae8e1a623e19caa9ecc4a54f columnFamilyName f 2023-07-23 21:10:34,135 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-23 21:10:34,135 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,46093,1690146629455 in 224 msec 2023-07-23 21:10:34,135 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] regionserver.HStore(310): Store=de18c46fae8e1a623e19caa9ecc4a54f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:34,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-23 21:10:34,140 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,37385,1690146629650 in 225 msec 
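
The recurring "Checking to see if procedure is done pid=12" entries are the client side of the create: the admin hands back a future that appears to poll the master until CreateTableProcedure pid=12 completes (its completion is reported further down). A minimal sketch of driving the same create asynchronously, assuming an Admin handle plus the descriptor and split keys from the earlier sketch; the class and method names are illustrative.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

// Sketch: the async create returns a future that polls the master, which is
// what the periodic "Checking to see if procedure is done pid=12" entries reflect.
final class AsyncCreateSketch {
  static void create(Admin admin, TableDescriptor desc, byte[][] splitKeys) throws Exception {
    Future<Void> pending = admin.createTableAsync(desc, splitKeys);
    pending.get(60, TimeUnit.SECONDS); // blocks until the CreateTableProcedure completes
  }
}
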
2023-07-23 21:10:34,141 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, ASSIGN in 400 msec 2023-07-23 21:10:34,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, ASSIGN in 405 msec 2023-07-23 21:10:34,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,151 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:34,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:34,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de18c46fae8e1a623e19caa9ecc4a54f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11876836480, jitterRate=0.10611659288406372}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:34,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de18c46fae8e1a623e19caa9ecc4a54f: 2023-07-23 21:10:34,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2dae535cfa89620c3cfb25560d043112; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9666581760, jitterRate=-0.09972941875457764}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:34,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2dae535cfa89620c3cfb25560d043112: 2023-07-23 21:10:34,163 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f., pid=18, masterSystemTime=1690146634052 2023-07-23 21:10:34,164 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112., pid=22, masterSystemTime=1690146634056 2023-07-23 21:10:34,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:34,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:34,167 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:34,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,167 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634167"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146634167"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146634167"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146634167"}]},"ts":"1690146634167"} 2023-07-23 21:10:34,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 
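
The "Opened ...; SteppingSplitPolicy..." entries show every region of the table coming up with SteppingSplitPolicy and a desiredMaxFileSize of roughly 10 GB plus the logged per-region jitterRate, which lines up with the stock hbase.hregion.max.filesize of 10737418240 bytes. A sketch, again not taken from this test, of pinning the same policy and base size on the table descriptor itself:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Sketch: pin the split policy and base max file size that the
// "Opened ...; SteppingSplitPolicy{...}" entries show being applied per region.
final class SplitPolicySketch {
  static TableDescriptor build() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
        .setMaxFileSize(10737418240L) // 10 GB base; the logged sizes add per-region jitter
        .build();
  }
}
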
2023-07-23 21:10:34,168 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 53838dbf2294656bb16fe08be5da16ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 21:10:34,169 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634168"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146634168"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146634168"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146634168"}]},"ts":"1690146634168"} 2023-07-23 21:10:34,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:34,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,170 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,172 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,175 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=14 2023-07-23 21:10:34,175 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=14, state=SUCCESS; OpenRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,37385,1690146629650 in 272 msec 2023-07-23 21:10:34,176 DEBUG [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/f 2023-07-23 21:10:34,176 DEBUG [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/f 2023-07-23 21:10:34,177 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 53838dbf2294656bb16fe08be5da16ad columnFamilyName f 2023-07-23 21:10:34,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-23 21:10:34,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,46093,1690146629455 in 267 msec 2023-07-23 21:10:34,177 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, ASSIGN in 440 msec 2023-07-23 21:10:34,178 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] regionserver.HStore(310): Store=53838dbf2294656bb16fe08be5da16ad/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:34,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,181 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, ASSIGN in 442 msec 2023-07-23 21:10:34,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:34,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 53838dbf2294656bb16fe08be5da16ad; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9565605760, jitterRate=-0.10913354158401489}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:34,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 53838dbf2294656bb16fe08be5da16ad: 2023-07-23 21:10:34,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad., pid=20, masterSystemTime=1690146634056 2023-07-23 21:10:34,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:34,201 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:34,202 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,203 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146634202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146634202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146634202"}]},"ts":"1690146634202"} 2023-07-23 21:10:34,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=17 2023-07-23 21:10:34,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=17, state=SUCCESS; OpenRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,46093,1690146629455 in 304 msec 2023-07-23 21:10:34,213 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-23 21:10:34,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, ASSIGN in 475 msec 2023-07-23 21:10:34,215 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:34,216 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146634216"}]},"ts":"1690146634216"} 2023-07-23 21:10:34,217 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 21:10:34,220 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:34,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 759 msec 2023-07-23 21:10:34,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:10:34,605 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 
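
With procId 12 reported complete, the entries that follow show the test waiting until every region of Group_testTableMoveTruncateAndDrop is assigned (timeout 60000 ms) and then opening admin connections to the region servers. A minimal sketch of that wait from test code, assuming an HBaseTestingUtility instance (names below are illustrative), followed by listing where each region landed:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.RegionLocator;

// Sketch: wait for all regions of the table to be assigned, then print
// region -> server, mirroring the waits logged right after table creation.
final class WaitForAssignmentSketch {
  static void await(HBaseTestingUtility util, TableName table) throws Exception {
    util.waitUntilAllRegionsAssigned(table, 60000);
    try (RegionLocator locator = util.getConnection().getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
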
2023-07-23 21:10:34,606 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-23 21:10:34,607 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:34,615 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-23 21:10:34,615 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:34,616 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-23 21:10:34,616 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:34,622 DEBUG [Listener at localhost/39787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:34,629 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44684, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:34,632 DEBUG [Listener at localhost/39787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:34,635 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:34,636 DEBUG [Listener at localhost/39787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:34,646 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:34,648 DEBUG [Listener at localhost/39787] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:34,652 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49946, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:34,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:34,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:34,668 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:34,686 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:34,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:34,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region b7929c05b3320857451e9f37ff6b6234 to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:34,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:34,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:34,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:34,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:34,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, REOPEN/MOVE 2023-07-23 21:10:34,703 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, REOPEN/MOVE 2023-07-23 21:10:34,705 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,705 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146634705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146634705"}]},"ts":"1690146634705"} 2023-07-23 21:10:34,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE; CloseRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:34,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region de18c46fae8e1a623e19caa9ecc4a54f to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,711 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:34,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:34,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:34,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:34,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:34,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, REOPEN/MOVE 2023-07-23 21:10:34,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 2dae535cfa89620c3cfb25560d043112 to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,714 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, REOPEN/MOVE 2023-07-23 21:10:34,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:34,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:34,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:34,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:34,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:34,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, REOPEN/MOVE 2023-07-23 21:10:34,717 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:34,724 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146634717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146634717"}]},"ts":"1690146634717"} 2023-07-23 21:10:34,727 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, REOPEN/MOVE 2023-07-23 21:10:34,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 2fecdee3dfc79f394ed8516b711aef23 to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,729 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=25, state=RUNNABLE; CloseRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:34,729 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,729 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634729"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146634729"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146634729"}]},"ts":"1690146634729"} 2023-07-23 21:10:34,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:34,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:34,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:34,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:34,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:34,734 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:34,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, REOPEN/MOVE 2023-07-23 21:10:34,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 53838dbf2294656bb16fe08be5da16ad to RSGroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:34,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:34,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, REOPEN/MOVE 2023-07-23 21:10:34,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_49429789, current retry=0 2023-07-23 21:10:34,740 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, REOPEN/MOVE 2023-07-23 21:10:34,740 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, REOPEN/MOVE 2023-07-23 21:10:34,742 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:34,742 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:34,742 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634742"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146634742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146634742"}]},"ts":"1690146634742"} 2023-07-23 21:10:34,742 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634742"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146634742"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146634742"}]},"ts":"1690146634742"} 2023-07-23 21:10:34,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=28, state=RUNNABLE; CloseRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:34,746 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=30, state=RUNNABLE; CloseRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:34,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2dae535cfa89620c3cfb25560d043112, disabling compactions & flushes 2023-07-23 21:10:34,884 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. after waiting 0 ms 2023-07-23 21:10:34,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de18c46fae8e1a623e19caa9ecc4a54f, disabling compactions & flushes 2023-07-23 21:10:34,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:34,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:34,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. after waiting 0 ms 2023-07-23 21:10:34,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:34,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:34,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 
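The CLOSE activity above is the first half of the per-region REOPEN/MOVE procedures that RSGroupAdminServer schedules when a table is moved into another region server group. Below is a minimal client-side sketch of the kind of call that triggers this sequence, assuming the RSGroupAdminClient API shipped in the branch-2.4 hbase-rsgroup module; the group name and server address are copied from the log purely for illustration and the actual setup in TestRSGroupsAdmin1 may differ.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToRSGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the target group and give it a region server (host:port is illustrative).
      rsGroupAdmin.addRSGroup("Group_testTableMoveTruncateAndDrop_49429789");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34893)),
          "Group_testTableMoveTruncateAndDrop_49429789");
      // Moving the table makes the master schedule one REOPEN/MOVE
      // TransitRegionStateProcedure per region, which produces the CLOSE/OPEN entries in this log.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_49429789");
    }
  }
}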
2023-07-23 21:10:34,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de18c46fae8e1a623e19caa9ecc4a54f: 2023-07-23 21:10:34,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding de18c46fae8e1a623e19caa9ecc4a54f move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:34,899 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:34,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:34,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2dae535cfa89620c3cfb25560d043112: 2023-07-23 21:10:34,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2dae535cfa89620c3cfb25560d043112 move to jenkins-hbase4.apache.org,35321,1690146633061 record at close sequenceid=2 2023-07-23 21:10:34,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:34,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2fecdee3dfc79f394ed8516b711aef23, disabling compactions & flushes 2023-07-23 21:10:34,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:34,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:34,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. after waiting 0 ms 2023-07-23 21:10:34,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 
2023-07-23 21:10:34,909 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=CLOSED 2023-07-23 21:10:34,909 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634908"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146634908"}]},"ts":"1690146634908"} 2023-07-23 21:10:34,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:34,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7929c05b3320857451e9f37ff6b6234, disabling compactions & flushes 2023-07-23 21:10:34,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:34,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:34,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. after waiting 0 ms 2023-07-23 21:10:34,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:34,922 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=CLOSED 2023-07-23 21:10:34,922 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146634922"}]},"ts":"1690146634922"} 2023-07-23 21:10:34,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:34,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 
2023-07-23 21:10:34,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2fecdee3dfc79f394ed8516b711aef23: 2023-07-23 21:10:34,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2fecdee3dfc79f394ed8516b711aef23 move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:34,929 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=25 2023-07-23 21:10:34,929 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=25, state=SUCCESS; CloseRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,37385,1690146629650 in 185 msec 2023-07-23 21:10:34,931 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:34,932 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=CLOSED 2023-07-23 21:10:34,932 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146634932"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146634932"}]},"ts":"1690146634932"} 2023-07-23 21:10:34,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-23 21:10:34,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,46093,1690146629455 in 196 msec 2023-07-23 21:10:34,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:34,935 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:34,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:34,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 
2023-07-23 21:10:34,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7929c05b3320857451e9f37ff6b6234: 2023-07-23 21:10:34,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b7929c05b3320857451e9f37ff6b6234 move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:34,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=28 2023-07-23 21:10:34,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=28, state=SUCCESS; CloseRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,37385,1690146629650 in 189 msec 2023-07-23 21:10:34,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:34,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 53838dbf2294656bb16fe08be5da16ad, disabling compactions & flushes 2023-07-23 21:10:34,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:34,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:34,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. after waiting 0 ms 2023-07-23 21:10:34,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 
2023-07-23 21:10:34,946 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:34,946 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=CLOSED 2023-07-23 21:10:34,947 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634946"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146634946"}]},"ts":"1690146634946"} 2023-07-23 21:10:34,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-23 21:10:34,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; CloseRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,46093,1690146629455 in 241 msec 2023-07-23 21:10:34,954 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:34,963 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:34,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 
2023-07-23 21:10:34,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 53838dbf2294656bb16fe08be5da16ad: 2023-07-23 21:10:34,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 53838dbf2294656bb16fe08be5da16ad move to jenkins-hbase4.apache.org,35321,1690146633061 record at close sequenceid=2 2023-07-23 21:10:34,980 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=CLOSED 2023-07-23 21:10:34,980 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146634980"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146634980"}]},"ts":"1690146634980"} 2023-07-23 21:10:34,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:34,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=30 2023-07-23 21:10:34,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=30, state=SUCCESS; CloseRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,46093,1690146629455 in 237 msec 2023-07-23 21:10:34,987 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:35,082 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
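At this point all five regions are CLOSED and the balancer has picked destinations among the target group's servers. Group membership itself is metadata tracked by the master and can be inspected from a client; the sketch below is a hedged illustration, again assuming the branch-2.4 RSGroupAdminClient API (getRSGroupInfo and the RSGroupInfo accessors).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ShowRSGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo info =
          rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_49429789");
      // Servers assigned to the group and tables pinned to it, as tracked by the master.
      System.out.println("servers: " + info.getServers());
      System.out.println("tables:  " + info.getTables());
    }
  }
}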
2023-07-23 21:10:35,083 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,083 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635082"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635082"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635082"}]},"ts":"1690146635082"} 2023-07-23 21:10:35,084 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,084 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635084"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635084"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635084"}]},"ts":"1690146635084"} 2023-07-23 21:10:35,084 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,084 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635084"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635084"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635084"}]},"ts":"1690146635084"} 2023-07-23 21:10:35,085 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,085 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635085"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635085"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635085"}]},"ts":"1690146635085"} 2023-07-23 21:10:35,086 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,086 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635083"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635083"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635083"}]},"ts":"1690146635083"} 2023-07-23 21:10:35,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=30, state=RUNNABLE; OpenRegionProcedure 
53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:35,090 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=28, state=RUNNABLE; OpenRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,092 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=25, state=RUNNABLE; OpenRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,095 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=23, state=RUNNABLE; OpenRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=26, state=RUNNABLE; OpenRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:35,242 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,242 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:35,244 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:35,247 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,247 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:35,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 
2023-07-23 21:10:35,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 53838dbf2294656bb16fe08be5da16ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 21:10:35,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:35,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,254 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44696, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:35,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,264 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7929c05b3320857451e9f37ff6b6234, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 21:10:35,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:35,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,266 DEBUG [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/f 2023-07-23 21:10:35,266 DEBUG [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/f 2023-07-23 21:10:35,267 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 53838dbf2294656bb16fe08be5da16ad columnFamilyName f 2023-07-23 21:10:35,268 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,270 DEBUG [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/f 2023-07-23 21:10:35,270 DEBUG [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/f 2023-07-23 21:10:35,270 INFO [StoreOpener-53838dbf2294656bb16fe08be5da16ad-1] regionserver.HStore(310): Store=53838dbf2294656bb16fe08be5da16ad/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:35,273 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7929c05b3320857451e9f37ff6b6234 columnFamilyName f 2023-07-23 21:10:35,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,275 INFO [StoreOpener-b7929c05b3320857451e9f37ff6b6234-1] regionserver.HStore(310): Store=b7929c05b3320857451e9f37ff6b6234/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:35,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,286 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 53838dbf2294656bb16fe08be5da16ad; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11292327520, jitterRate=0.051679953932762146}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:35,287 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 53838dbf2294656bb16fe08be5da16ad: 2023-07-23 21:10:35,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad., pid=33, masterSystemTime=1690146635241 2023-07-23 21:10:35,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7929c05b3320857451e9f37ff6b6234; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10647475360, jitterRate=-0.008376583456993103}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:35,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7929c05b3320857451e9f37ff6b6234: 2023-07-23 21:10:35,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 
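The CompactionConfiguration and split-policy values printed at store-open time (minCompactSize, minFilesToCompact, desiredMaxFileSize with a random jitter, and so on) are derived from site configuration plus any per-table or per-column-family overrides. The sketch below only shows where such overrides would live on a table descriptor; the property values are illustrative assumptions, not the settings used by this test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class TableLevelCompactionOverrides {
  public static void main(String[] args) {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        // Per-table overrides; region servers fold these into the values they log at store open.
        .setValue("hbase.hstore.compaction.min", "3")
        .setValue("hbase.hregion.max.filesize", String.valueOf(10L * 1024 * 1024 * 1024))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("f")))
        .build();
    System.out.println(td);
  }
}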
2023-07-23 21:10:35,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2dae535cfa89620c3cfb25560d043112, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 21:10:35,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:35,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,295 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234., pid=36, masterSystemTime=1690146635247 2023-07-23 21:10:35,296 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635295"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146635295"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146635295"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146635295"}]},"ts":"1690146635295"} 2023-07-23 21:10:35,301 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 
2023-07-23 21:10:35,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de18c46fae8e1a623e19caa9ecc4a54f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 21:10:35,304 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:35,304 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635303"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146635303"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146635303"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146635303"}]},"ts":"1690146635303"} 2023-07-23 21:10:35,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,304 DEBUG [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/f 2023-07-23 21:10:35,306 DEBUG [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/f 2023-07-23 21:10:35,306 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2dae535cfa89620c3cfb25560d043112 columnFamilyName f 2023-07-23 21:10:35,307 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=30 2023-07-23 21:10:35,307 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=30, state=SUCCESS; OpenRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,35321,1690146633061 in 214 msec 2023-07-23 21:10:35,310 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, REOPEN/MOVE in 572 msec 2023-07-23 21:10:35,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=23 2023-07-23 21:10:35,310 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=23, state=SUCCESS; OpenRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,34893,1690146629259 in 212 msec 2023-07-23 21:10:35,311 INFO [StoreOpener-2dae535cfa89620c3cfb25560d043112-1] regionserver.HStore(310): Store=2dae535cfa89620c3cfb25560d043112/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:35,312 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,313 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,315 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,315 DEBUG [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/f 2023-07-23 21:10:35,317 DEBUG [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/f 2023-07-23 21:10:35,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,320 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de18c46fae8e1a623e19caa9ecc4a54f columnFamilyName f 2023-07-23 21:10:35,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2dae535cfa89620c3cfb25560d043112; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11053276960, jitterRate=0.029416635632514954}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:35,321 INFO [StoreOpener-de18c46fae8e1a623e19caa9ecc4a54f-1] regionserver.HStore(310): Store=de18c46fae8e1a623e19caa9ecc4a54f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:35,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2dae535cfa89620c3cfb25560d043112: 2023-07-23 21:10:35,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112., pid=37, masterSystemTime=1690146635241 2023-07-23 21:10:35,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,327 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, REOPEN/MOVE in 614 msec 2023-07-23 21:10:35,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:35,328 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 
2023-07-23 21:10:35,330 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,330 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635330"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146635330"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146635330"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146635330"}]},"ts":"1690146635330"} 2023-07-23 21:10:35,341 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de18c46fae8e1a623e19caa9ecc4a54f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9992051040, jitterRate=-0.06941772997379303}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:35,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de18c46fae8e1a623e19caa9ecc4a54f: 2023-07-23 21:10:35,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f., pid=35, masterSystemTime=1690146635247 2023-07-23 21:10:35,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=26 2023-07-23 21:10:35,345 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=26, state=SUCCESS; OpenRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,35321,1690146633061 in 243 msec 2023-07-23 21:10:35,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 
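The RegionStateStore Put entries throughout this move are ordinary writes to the info family of hbase:meta (regioninfo, sn/server, serverstartcode, seqnumDuringOpen, state), so they can be inspected with a plain client scan. A minimal sketch follows, assuming only the standard 2.x client API.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaForTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows in hbase:meta start with "<table>,<startkey>,<timestamp>.<encoded>.";
      // a prefix scan on "<table>," picks up every region of the table.
      Scan scan = new Scan()
          .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
          .addFamily(Bytes.toBytes("info"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toStringBinary(r.getRow())
              + " server=" + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server")))
              + " state=" + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
        }
      }
    }
  }
}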
2023-07-23 21:10:35,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2fecdee3dfc79f394ed8516b711aef23, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 21:10:35,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:35,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,348 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, REOPEN/MOVE in 630 msec 2023-07-23 21:10:35,348 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,349 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635348"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146635348"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146635348"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146635348"}]},"ts":"1690146635348"} 2023-07-23 21:10:35,357 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,358 DEBUG [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/f 2023-07-23 21:10:35,359 DEBUG [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/f 2023-07-23 21:10:35,359 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major 
period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2fecdee3dfc79f394ed8516b711aef23 columnFamilyName f 2023-07-23 21:10:35,361 INFO [StoreOpener-2fecdee3dfc79f394ed8516b711aef23-1] regionserver.HStore(310): Store=2fecdee3dfc79f394ed8516b711aef23/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:35,361 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=25 2023-07-23 21:10:35,361 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=25, state=SUCCESS; OpenRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,34893,1690146629259 in 259 msec 2023-07-23 21:10:35,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, REOPEN/MOVE in 649 msec 2023-07-23 21:10:35,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2fecdee3dfc79f394ed8516b711aef23; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10397590240, jitterRate=-0.03164894878864288}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:35,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2fecdee3dfc79f394ed8516b711aef23: 2023-07-23 21:10:35,376 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23., pid=34, masterSystemTime=1690146635247 2023-07-23 21:10:35,379 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:35,379 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 
2023-07-23 21:10:35,379 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,380 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635379"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146635379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146635379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146635379"}]},"ts":"1690146635379"} 2023-07-23 21:10:35,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=28 2023-07-23 21:10:35,385 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=28, state=SUCCESS; OpenRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,34893,1690146629259 in 292 msec 2023-07-23 21:10:35,387 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, REOPEN/MOVE in 653 msec 2023-07-23 21:10:35,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-23 21:10:35,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_49429789. 
2023-07-23 21:10:35,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:35,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:35,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:35,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:35,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:35,750 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:35,757 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:35,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:35,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:35,773 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146635773"}]},"ts":"1690146635773"} 2023-07-23 21:10:35,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 21:10:35,775 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 21:10:35,777 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 21:10:35,779 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, UNASSIGN}] 2023-07-23 21:10:35,781 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, UNASSIGN 2023-07-23 21:10:35,781 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, UNASSIGN 2023-07-23 21:10:35,781 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, UNASSIGN 2023-07-23 21:10:35,782 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, UNASSIGN 2023-07-23 21:10:35,782 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, UNASSIGN 2023-07-23 21:10:35,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,783 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635783"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635783"}]},"ts":"1690146635783"} 2023-07-23 21:10:35,783 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,784 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:35,784 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:35,784 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635784"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635784"}]},"ts":"1690146635784"} 2023-07-23 21:10:35,784 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 
21:10:35,784 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635784"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635784"}]},"ts":"1690146635784"} 2023-07-23 21:10:35,784 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635783"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635783"}]},"ts":"1690146635783"} 2023-07-23 21:10:35,784 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635784"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146635784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146635784"}]},"ts":"1690146635784"} 2023-07-23 21:10:35,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=42, state=RUNNABLE; CloseRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=41, state=RUNNABLE; CloseRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:35,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:35,791 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=40, state=RUNNABLE; CloseRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:35,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 21:10:35,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 53838dbf2294656bb16fe08be5da16ad, disabling compactions & flushes 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2fecdee3dfc79f394ed8516b711aef23, disabling compactions & flushes 2023-07-23 21:10:35,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. after waiting 0 ms 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. after waiting 0 ms 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:35,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:35,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad. 2023-07-23 21:10:35,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 53838dbf2294656bb16fe08be5da16ad: 2023-07-23 21:10:35,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:35,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:35,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2dae535cfa89620c3cfb25560d043112, disabling compactions & flushes 2023-07-23 21:10:35,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 
2023-07-23 21:10:35,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:35,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. after waiting 0 ms 2023-07-23 21:10:35,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:35,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23. 2023-07-23 21:10:35,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2fecdee3dfc79f394ed8516b711aef23: 2023-07-23 21:10:35,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:35,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112. 2023-07-23 21:10:35,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2dae535cfa89620c3cfb25560d043112: 2023-07-23 21:10:35,971 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=53838dbf2294656bb16fe08be5da16ad, regionState=CLOSED 2023-07-23 21:10:35,971 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146635971"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146635971"}]},"ts":"1690146635971"} 2023-07-23 21:10:35,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:35,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de18c46fae8e1a623e19caa9ecc4a54f, disabling compactions & flushes 2023-07-23 21:10:35,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 
after waiting 0 ms 2023-07-23 21:10:35,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,978 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=2fecdee3dfc79f394ed8516b711aef23, regionState=CLOSED 2023-07-23 21:10:35,979 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635978"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146635978"}]},"ts":"1690146635978"} 2023-07-23 21:10:35,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:35,981 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=2dae535cfa89620c3cfb25560d043112, regionState=CLOSED 2023-07-23 21:10:35,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635981"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146635981"}]},"ts":"1690146635981"} 2023-07-23 21:10:35,984 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-23 21:10:35,984 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure 53838dbf2294656bb16fe08be5da16ad, server=jenkins-hbase4.apache.org,35321,1690146633061 in 189 msec 2023-07-23 21:10:35,990 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=53838dbf2294656bb16fe08be5da16ad, UNASSIGN in 205 msec 2023-07-23 21:10:35,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=42 2023-07-23 21:10:35,991 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=42, state=SUCCESS; CloseRegionProcedure 2fecdee3dfc79f394ed8516b711aef23, server=jenkins-hbase4.apache.org,34893,1690146629259 in 197 msec 2023-07-23 21:10:35,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:35,993 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-23 21:10:35,993 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2fecdee3dfc79f394ed8516b711aef23, UNASSIGN in 212 msec 2023-07-23 21:10:35,993 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; CloseRegionProcedure 2dae535cfa89620c3cfb25560d043112, server=jenkins-hbase4.apache.org,35321,1690146633061 in 198 msec 2023-07-23 21:10:35,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f. 2023-07-23 21:10:35,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de18c46fae8e1a623e19caa9ecc4a54f: 2023-07-23 21:10:35,995 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2dae535cfa89620c3cfb25560d043112, UNASSIGN in 214 msec 2023-07-23 21:10:35,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:35,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:35,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7929c05b3320857451e9f37ff6b6234, disabling compactions & flushes 2023-07-23 21:10:35,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. after waiting 0 ms 2023-07-23 21:10:35,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 2023-07-23 21:10:35,998 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=de18c46fae8e1a623e19caa9ecc4a54f, regionState=CLOSED 2023-07-23 21:10:35,998 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146635997"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146635997"}]},"ts":"1690146635997"} 2023-07-23 21:10:36,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=40 2023-07-23 21:10:36,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:36,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=40, state=SUCCESS; CloseRegionProcedure de18c46fae8e1a623e19caa9ecc4a54f, server=jenkins-hbase4.apache.org,34893,1690146629259 in 209 msec 2023-07-23 21:10:36,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234. 
2023-07-23 21:10:36,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7929c05b3320857451e9f37ff6b6234: 2023-07-23 21:10:36,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de18c46fae8e1a623e19caa9ecc4a54f, UNASSIGN in 225 msec 2023-07-23 21:10:36,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:36,008 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=b7929c05b3320857451e9f37ff6b6234, regionState=CLOSED 2023-07-23 21:10:36,009 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636008"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636008"}]},"ts":"1690146636008"} 2023-07-23 21:10:36,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-23 21:10:36,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure b7929c05b3320857451e9f37ff6b6234, server=jenkins-hbase4.apache.org,34893,1690146629259 in 222 msec 2023-07-23 21:10:36,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=38 2023-07-23 21:10:36,015 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7929c05b3320857451e9f37ff6b6234, UNASSIGN in 234 msec 2023-07-23 21:10:36,016 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146636016"}]},"ts":"1690146636016"} 2023-07-23 21:10:36,018 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 21:10:36,021 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 21:10:36,024 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 259 msec 2023-07-23 21:10:36,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-23 21:10:36,079 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-23 21:10:36,080 INFO [Listener at localhost/39787] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:36,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:36,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 
2023-07-23 21:10:36,097 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-23 21:10:36,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 21:10:36,112 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:36,112 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:36,112 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:36,113 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:36,112 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:36,118 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits] 2023-07-23 21:10:36,118 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits] 2023-07-23 21:10:36,118 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits] 2023-07-23 21:10:36,118 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits] 2023-07-23 21:10:36,118 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): 
Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits] 2023-07-23 21:10:36,135 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f/recovered.edits/7.seqid 2023-07-23 21:10:36,135 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112/recovered.edits/7.seqid 2023-07-23 21:10:36,136 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23/recovered.edits/7.seqid 2023-07-23 21:10:36,136 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234/recovered.edits/7.seqid 2023-07-23 21:10:36,137 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de18c46fae8e1a623e19caa9ecc4a54f 2023-07-23 21:10:36,138 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad/recovered.edits/7.seqid 2023-07-23 21:10:36,138 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2fecdee3dfc79f394ed8516b711aef23 2023-07-23 21:10:36,138 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2dae535cfa89620c3cfb25560d043112 2023-07-23 21:10:36,138 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7929c05b3320857451e9f37ff6b6234 2023-07-23 21:10:36,139 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/53838dbf2294656bb16fe08be5da16ad 2023-07-23 21:10:36,139 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 21:10:36,174 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 21:10:36,179 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 21:10:36,179 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-23 21:10:36,180 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146636180"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,180 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146636180"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,180 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146636180"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,180 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146636180"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,180 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146636180"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,183 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 21:10:36,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b7929c05b3320857451e9f37ff6b6234, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146633454.b7929c05b3320857451e9f37ff6b6234.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => de18c46fae8e1a623e19caa9ecc4a54f, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146633454.de18c46fae8e1a623e19caa9ecc4a54f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 2dae535cfa89620c3cfb25560d043112, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146633454.2dae535cfa89620c3cfb25560d043112.', STARTKEY => 
'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 2fecdee3dfc79f394ed8516b711aef23, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146633454.2fecdee3dfc79f394ed8516b711aef23.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 53838dbf2294656bb16fe08be5da16ad, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146633454.53838dbf2294656bb16fe08be5da16ad.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 21:10:36,184 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-23 21:10:36,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146636184"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:36,186 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 21:10:36,194 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,194 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,194 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,194 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,194 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,195 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d empty. 2023-07-23 21:10:36,195 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 empty. 2023-07-23 21:10:36,195 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 empty. 2023-07-23 21:10:36,195 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d empty. 2023-07-23 21:10:36,195 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 empty. 
2023-07-23 21:10:36,196 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,196 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,196 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,196 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,196 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,196 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 21:10:36,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 21:10:36,220 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:36,222 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => a0aede0edbdbaae7fc57abbd2fd0173d, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:36,222 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b14508269b57178630b2aa37455745e4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:36,225 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 23b0304928cf1d546329202cbb9a8226, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b14508269b57178630b2aa37455745e4, disabling compactions & flushes 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing a0aede0edbdbaae7fc57abbd2fd0173d, disabling compactions & flushes 2023-07-23 21:10:36,279 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:36,279 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. after waiting 0 ms 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. after waiting 0 ms 2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 
2023-07-23 21:10:36,279 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:36,280 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:36,280 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for a0aede0edbdbaae7fc57abbd2fd0173d: 2023-07-23 21:10:36,280 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:36,280 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b14508269b57178630b2aa37455745e4: 2023-07-23 21:10:36,281 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0853b36ddf82ccd2bb3351769c47343d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:36,281 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 22f598946c69a6ae0394ca74915f4bb1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 23b0304928cf1d546329202cbb9a8226, disabling compactions & flushes 2023-07-23 21:10:36,283 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 
2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. after waiting 0 ms 2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:36,283 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:36,283 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 23b0304928cf1d546329202cbb9a8226: 2023-07-23 21:10:36,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0853b36ddf82ccd2bb3351769c47343d, disabling compactions & flushes 2023-07-23 21:10:36,324 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,324 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,325 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. after waiting 0 ms 2023-07-23 21:10:36,325 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,325 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 
2023-07-23 21:10:36,325 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0853b36ddf82ccd2bb3351769c47343d: 2023-07-23 21:10:36,327 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,327 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 22f598946c69a6ae0394ca74915f4bb1, disabling compactions & flushes 2023-07-23 21:10:36,327 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:36,327 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:36,327 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. after waiting 0 ms 2023-07-23 21:10:36,327 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:36,328 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 
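Note: taken together, the five regions created above cover ('', 'aaaaa'), ('aaaaa', 'i\xBF\x14i\xBE'), ('i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B'), ('r\x1C\xC7r\x1B', 'zzzzz') and ('zzzzz', ''), i.e. four split keys, which the truncate with preserveSplits=true (see the TruncateTableProcedure completion further down) carries over unchanged. A sketch of creating a table with the same pre-split layout directly; it assumes an Admin handle plus the descriptor from the previous sketch, and the helper name is illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class PreSplitSketch {
      /** Creates the table with the four split keys that yield the five regions seen in the log. */
      static void createPreSplit(Admin admin, TableDescriptor desc) throws IOException {
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            // Bytes.toBytesBinary understands the \xNN escapes HBase uses when printing keys.
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz"),
        };
        admin.createTable(desc, splits);
      }
    }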
2023-07-23 21:10:36,328 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 22f598946c69a6ae0394ca74915f4bb1: 2023-07-23 21:10:36,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636334"}]},"ts":"1690146636334"} 2023-07-23 21:10:36,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636334"}]},"ts":"1690146636334"} 2023-07-23 21:10:36,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636334"}]},"ts":"1690146636334"} 2023-07-23 21:10:36,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636334"}]},"ts":"1690146636334"} 2023-07-23 21:10:36,335 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636334"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146636334"}]},"ts":"1690146636334"} 2023-07-23 21:10:36,339 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
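Note: each Put above targets the region's row in hbase:meta, writing info:regioninfo plus info:state, and the row key follows the '<table>,<start key>,<creation timestamp>.<encoded name>.' pattern visible in the log. A rough client-side sketch for inspecting those rows, assuming an open Connection named conn; reading info:state and info:server as plain strings is consistent with the short values (OPENING, host:port) that later entries show being written.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class MetaRowsSketch {
      /** Prints the hbase:meta rows belonging to the test table. */
      static void printRegionRows(Connection conn) throws java.io.IOException {
        byte[] prefix = Bytes.toBytes("Group_testTableMoveTruncateAndDrop,");
        Scan scan = new Scan().setFilter(new PrefixFilter(prefix));
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(scan)) {
          for (Result r : scanner) {
            byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
            byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
            System.out.println(Bytes.toString(r.getRow())
                + " state=" + (state == null ? "-" : Bytes.toString(state))
                + " server=" + (server == null ? "-" : Bytes.toString(server)));
          }
        }
      }
    }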
2023-07-23 21:10:36,340 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146636340"}]},"ts":"1690146636340"} 2023-07-23 21:10:36,342 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 21:10:36,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:36,348 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:36,348 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:36,348 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:36,350 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, ASSIGN}] 2023-07-23 21:10:36,353 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, ASSIGN 2023-07-23 21:10:36,353 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, ASSIGN 2023-07-23 21:10:36,353 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, ASSIGN 2023-07-23 21:10:36,353 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, ASSIGN 2023-07-23 21:10:36,354 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:36,355 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, ASSIGN 2023-07-23 21:10:36,355 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:36,355 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:36,355 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:36,357 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:36,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 21:10:36,505 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
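Note: at this point the balancer has produced a plan ("Reassigned 5 regions. 5 retained the pre-restart assignment.") and each ASSIGN procedure carries its target location. The same placement can be read back from the client with a RegionLocator; a small sketch, again assuming a Connection named conn.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class LocationSketch {
      /** Lists where each region of the test table is currently hosted. */
      static void printLocations(Connection conn) throws java.io.IOException {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // e.g. "a0aede0edbdbaae7fc57abbd2fd0173d -> jenkins-hbase4.apache.org,34893,1690146629259"
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }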
2023-07-23 21:10:36,509 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0853b36ddf82ccd2bb3351769c47343d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:36,509 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=b14508269b57178630b2aa37455745e4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:36,509 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146636509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146636509"}]},"ts":"1690146636509"} 2023-07-23 21:10:36,509 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=22f598946c69a6ae0394ca74915f4bb1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,509 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146636509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146636509"}]},"ts":"1690146636509"} 2023-07-23 21:10:36,509 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146636509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146636509"}]},"ts":"1690146636509"} 2023-07-23 21:10:36,509 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=23b0304928cf1d546329202cbb9a8226, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,510 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a0aede0edbdbaae7fc57abbd2fd0173d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,510 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146636509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146636509"}]},"ts":"1690146636509"} 2023-07-23 21:10:36,510 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636509"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146636509"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146636509"}]},"ts":"1690146636509"} 2023-07-23 21:10:36,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=54, state=RUNNABLE; OpenRegionProcedure 
0853b36ddf82ccd2bb3351769c47343d, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:36,514 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=51, state=RUNNABLE; OpenRegionProcedure b14508269b57178630b2aa37455745e4, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:36,519 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=53, state=RUNNABLE; OpenRegionProcedure 22f598946c69a6ae0394ca74915f4bb1, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:36,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=50, state=RUNNABLE; OpenRegionProcedure a0aede0edbdbaae7fc57abbd2fd0173d, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:36,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=52, state=RUNNABLE; OpenRegionProcedure 23b0304928cf1d546329202cbb9a8226, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:36,676 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0853b36ddf82ccd2bb3351769c47343d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 21:10:36,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,680 INFO [StoreOpener-0853b36ddf82ccd2bb3351769c47343d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 
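Note: the OpenRegionProcedure children (pids 55-59) are dispatched to the two chosen servers, and the RS_OPEN_REGION handlers then open the regions locally. Test code built on HBaseTestingUtility usually waits for that to finish rather than polling the master by hand; a sketch under that assumption, with the utility instance (often a static TEST_UTIL field in such tests) passed in as a parameter.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class WaitSketch {
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Blocks until every region of the table is assigned and visible as open in hbase:meta.
        util.waitUntilAllRegionsAssigned(tn);
        // Or wait until the table answers reads, which implies its regions are online.
        util.waitTableAvailable(tn);
      }
    }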
2023-07-23 21:10:36,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23b0304928cf1d546329202cbb9a8226, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 21:10:36,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,687 INFO [StoreOpener-23b0304928cf1d546329202cbb9a8226-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,688 DEBUG [StoreOpener-0853b36ddf82ccd2bb3351769c47343d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/f 2023-07-23 21:10:36,688 DEBUG [StoreOpener-0853b36ddf82ccd2bb3351769c47343d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/f 2023-07-23 21:10:36,689 DEBUG [StoreOpener-23b0304928cf1d546329202cbb9a8226-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/f 2023-07-23 21:10:36,689 DEBUG [StoreOpener-23b0304928cf1d546329202cbb9a8226-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/f 2023-07-23 21:10:36,689 INFO [StoreOpener-0853b36ddf82ccd2bb3351769c47343d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
0853b36ddf82ccd2bb3351769c47343d columnFamilyName f 2023-07-23 21:10:36,689 INFO [StoreOpener-23b0304928cf1d546329202cbb9a8226-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23b0304928cf1d546329202cbb9a8226 columnFamilyName f 2023-07-23 21:10:36,690 INFO [StoreOpener-0853b36ddf82ccd2bb3351769c47343d-1] regionserver.HStore(310): Store=0853b36ddf82ccd2bb3351769c47343d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:36,690 INFO [StoreOpener-23b0304928cf1d546329202cbb9a8226-1] regionserver.HStore(310): Store=23b0304928cf1d546329202cbb9a8226/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:36,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:36,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:36,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 21:10:36,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:36,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:36,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0853b36ddf82ccd2bb3351769c47343d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10056093920, jitterRate=-0.06345327198505402}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:36,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0853b36ddf82ccd2bb3351769c47343d: 2023-07-23 21:10:36,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23b0304928cf1d546329202cbb9a8226; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11854220640, jitterRate=0.10401032865047455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:36,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23b0304928cf1d546329202cbb9a8226: 2023-07-23 21:10:36,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d., pid=55, masterSystemTime=1690146636671 2023-07-23 21:10:36,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226., pid=59, masterSystemTime=1690146636676 2023-07-23 21:10:36,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:36,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 
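Note: the "Opened ...; next sequenceid=2; SteppingSplitPolicy...{desiredMaxFileSize=..., jitterRate=...}" entries show the effective split policy per region; desiredMaxFileSize differs between regions only because a random jitter is applied to the configured maximum region size (the printed values are consistent with a base of roughly 10 GB). A hedged sketch of the two settings involved, using the standard configuration keys; the numbers are illustrative, not recommendations.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class SplitPolicySketch {
      static Configuration splitPolicyConf() {
        Configuration conf = HBaseConfiguration.create();
        // Base region size before a split is requested; the per-region desiredMaxFileSize
        // values in the log are this value scaled by the printed jitterRate.
        conf.setLong("hbase.hregion.max.filesize", 10737418240L); // 10 GB
        // Split policy applied to regions of tables that do not override it.
        conf.set("hbase.regionserver.region.split.policy",
            "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
        return conf;
      }
    }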
2023-07-23 21:10:36,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b14508269b57178630b2aa37455745e4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 21:10:36,716 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0853b36ddf82ccd2bb3351769c47343d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:36,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,716 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636716"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146636716"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146636716"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146636716"}]},"ts":"1690146636716"} 2023-07-23 21:10:36,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:36,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:36,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 
2023-07-23 21:10:36,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a0aede0edbdbaae7fc57abbd2fd0173d, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 21:10:36,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,720 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=23b0304928cf1d546329202cbb9a8226, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,720 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636720"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146636720"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146636720"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146636720"}]},"ts":"1690146636720"} 2023-07-23 21:10:36,721 INFO [StoreOpener-b14508269b57178630b2aa37455745e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,725 DEBUG [StoreOpener-b14508269b57178630b2aa37455745e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/f 2023-07-23 21:10:36,725 DEBUG [StoreOpener-b14508269b57178630b2aa37455745e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/f 2023-07-23 21:10:36,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=54 2023-07-23 21:10:36,726 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=54, state=SUCCESS; OpenRegionProcedure 0853b36ddf82ccd2bb3351769c47343d, server=jenkins-hbase4.apache.org,35321,1690146633061 in 208 msec 2023-07-23 21:10:36,727 INFO [StoreOpener-b14508269b57178630b2aa37455745e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b14508269b57178630b2aa37455745e4 columnFamilyName f 2023-07-23 21:10:36,728 INFO [StoreOpener-b14508269b57178630b2aa37455745e4-1] regionserver.HStore(310): Store=b14508269b57178630b2aa37455745e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:36,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=52 2023-07-23 21:10:36,730 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=52, state=SUCCESS; OpenRegionProcedure 23b0304928cf1d546329202cbb9a8226, server=jenkins-hbase4.apache.org,34893,1690146629259 in 195 msec 2023-07-23 21:10:36,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, ASSIGN in 376 msec 2023-07-23 21:10:36,733 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, ASSIGN in 380 msec 2023-07-23 21:10:36,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,737 INFO [StoreOpener-a0aede0edbdbaae7fc57abbd2fd0173d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,740 DEBUG [StoreOpener-a0aede0edbdbaae7fc57abbd2fd0173d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/f 2023-07-23 21:10:36,740 DEBUG [StoreOpener-a0aede0edbdbaae7fc57abbd2fd0173d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/f 2023-07-23 21:10:36,741 INFO [StoreOpener-a0aede0edbdbaae7fc57abbd2fd0173d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a0aede0edbdbaae7fc57abbd2fd0173d columnFamilyName f 2023-07-23 21:10:36,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b14508269b57178630b2aa37455745e4 2023-07-23 21:10:36,741 INFO [StoreOpener-a0aede0edbdbaae7fc57abbd2fd0173d-1] regionserver.HStore(310): Store=a0aede0edbdbaae7fc57abbd2fd0173d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:36,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:36,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:36,749 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b14508269b57178630b2aa37455745e4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11988623360, jitterRate=0.11652755737304688}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:36,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b14508269b57178630b2aa37455745e4: 2023-07-23 21:10:36,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4., pid=56, masterSystemTime=1690146636671 2023-07-23 21:10:36,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:36,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 
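Note: the CompactionConfiguration line repeated for every store (minCompactSize 128 MB, files [3, 10), ratio 1.2, off-peak ratio 5.0, major period 604800000 ms, major jitter 0.5) corresponds to the standard compaction settings. A sketch of the matching keys; the values simply restate what the log prints, and whether they are defaults or overrides depends on the cluster configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionConfSketch {
      static Configuration compactionConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 134217728L);  // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period (7 days)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter
        return conf;
      }
    }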
2023-07-23 21:10:36,755 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=b14508269b57178630b2aa37455745e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:36,755 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636755"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146636755"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146636755"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146636755"}]},"ts":"1690146636755"} 2023-07-23 21:10:36,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:36,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a0aede0edbdbaae7fc57abbd2fd0173d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11857681280, jitterRate=0.10433262586593628}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:36,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a0aede0edbdbaae7fc57abbd2fd0173d: 2023-07-23 21:10:36,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d., pid=58, masterSystemTime=1690146636676 2023-07-23 21:10:36,762 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=51 2023-07-23 21:10:36,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=51, state=SUCCESS; OpenRegionProcedure b14508269b57178630b2aa37455745e4, server=jenkins-hbase4.apache.org,35321,1690146633061 in 244 msec 2023-07-23 21:10:36,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:36,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:36,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 
2023-07-23 21:10:36,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22f598946c69a6ae0394ca74915f4bb1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 21:10:36,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:36,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,766 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, ASSIGN in 414 msec 2023-07-23 21:10:36,767 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a0aede0edbdbaae7fc57abbd2fd0173d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,767 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146636767"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146636767"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146636767"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146636767"}]},"ts":"1690146636767"} 2023-07-23 21:10:36,768 INFO [StoreOpener-22f598946c69a6ae0394ca74915f4bb1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,770 DEBUG [StoreOpener-22f598946c69a6ae0394ca74915f4bb1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/f 2023-07-23 21:10:36,770 DEBUG [StoreOpener-22f598946c69a6ae0394ca74915f4bb1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/f 2023-07-23 21:10:36,772 INFO [StoreOpener-22f598946c69a6ae0394ca74915f4bb1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major 
period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22f598946c69a6ae0394ca74915f4bb1 columnFamilyName f 2023-07-23 21:10:36,772 INFO [StoreOpener-22f598946c69a6ae0394ca74915f4bb1-1] regionserver.HStore(310): Store=22f598946c69a6ae0394ca74915f4bb1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:36,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=50 2023-07-23 21:10:36,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=50, state=SUCCESS; OpenRegionProcedure a0aede0edbdbaae7fc57abbd2fd0173d, server=jenkins-hbase4.apache.org,34893,1690146629259 in 247 msec 2023-07-23 21:10:36,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,775 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, ASSIGN in 425 msec 2023-07-23 21:10:36,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:36,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:36,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22f598946c69a6ae0394ca74915f4bb1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9609786080, jitterRate=-0.10501892864704132}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:36,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22f598946c69a6ae0394ca74915f4bb1: 2023-07-23 21:10:36,797 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1., pid=57, masterSystemTime=1690146636676 2023-07-23 21:10:36,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:36,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:36,801 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=22f598946c69a6ae0394ca74915f4bb1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:36,801 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146636801"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146636801"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146636801"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146636801"}]},"ts":"1690146636801"} 2023-07-23 21:10:36,806 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=53 2023-07-23 21:10:36,807 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=53, state=SUCCESS; OpenRegionProcedure 22f598946c69a6ae0394ca74915f4bb1, server=jenkins-hbase4.apache.org,34893,1690146629259 in 284 msec 2023-07-23 21:10:36,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-23 21:10:36,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, ASSIGN in 456 msec 2023-07-23 21:10:36,809 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146636809"}]},"ts":"1690146636809"} 2023-07-23 21:10:36,816 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 21:10:36,819 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-23 21:10:36,824 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 730 msec 2023-07-23 21:10:37,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-23 21:10:37,210 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-23 21:10:37,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:37,212 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:37,213 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 21:10:37,220 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146637220"}]},"ts":"1690146637220"} 2023-07-23 21:10:37,222 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 21:10:37,224 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 21:10:37,225 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, UNASSIGN}] 2023-07-23 21:10:37,227 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, UNASSIGN 2023-07-23 21:10:37,227 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, UNASSIGN 2023-07-23 21:10:37,231 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, UNASSIGN 2023-07-23 21:10:37,231 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, UNASSIGN 2023-07-23 21:10:37,231 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=a0aede0edbdbaae7fc57abbd2fd0173d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:37,231 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, UNASSIGN 2023-07-23 21:10:37,231 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=b14508269b57178630b2aa37455745e4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:37,231 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637231"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146637231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146637231"}]},"ts":"1690146637231"} 2023-07-23 21:10:37,231 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146637231"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146637231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146637231"}]},"ts":"1690146637231"} 2023-07-23 21:10:37,233 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0853b36ddf82ccd2bb3351769c47343d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:37,233 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=22f598946c69a6ae0394ca74915f4bb1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:37,233 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146637233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146637233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146637233"}]},"ts":"1690146637233"} 2023-07-23 21:10:37,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146637233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146637233"}]},"ts":"1690146637233"} 2023-07-23 21:10:37,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=23b0304928cf1d546329202cbb9a8226, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:37,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637234"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146637234"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146637234"}]},"ts":"1690146637234"} 2023-07-23 21:10:37,235 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=62, state=RUNNABLE; CloseRegionProcedure b14508269b57178630b2aa37455745e4, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:37,237 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; CloseRegionProcedure a0aede0edbdbaae7fc57abbd2fd0173d, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:37,238 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=65, state=RUNNABLE; CloseRegionProcedure 0853b36ddf82ccd2bb3351769c47343d, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:37,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure 22f598946c69a6ae0394ca74915f4bb1, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:37,242 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure 23b0304928cf1d546329202cbb9a8226, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:37,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 21:10:37,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b14508269b57178630b2aa37455745e4 2023-07-23 21:10:37,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:37,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b14508269b57178630b2aa37455745e4, disabling compactions & flushes 2023-07-23 21:10:37,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:37,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:37,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. after waiting 0 ms 2023-07-23 21:10:37,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 
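
[Editorial sketch, not part of the captured log] For orientation while reading the trace: the TRUNCATE (pid=49) and DISABLE (pid=60) procedures above are driven by ordinary client Admin calls. A minimal sketch, assuming a Connection to this mini-cluster and reusing the table name from the log (the real test goes through HBaseTestingUtility and its rsgroup helpers), might look like:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateThenDisableSketch {
      public static void main(String[] args) throws Exception {
        // Assumes hbase-site.xml on the classpath points at the (mini-)cluster.
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // truncateTable requires a disabled table; preserveSplits=true matches
          // "TruncateTableProcedure (table=... preserveSplits=true)" in the log.
          admin.disableTable(table);
          admin.truncateTable(table, true);   // table comes back ENABLED, as logged above
          // The DISABLE traced above (pid=60) is then just:
          admin.disableTable(table);
          System.out.println("disabled=" + admin.isTableDisabled(table));
        }
      }
    }
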
2023-07-23 21:10:37,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22f598946c69a6ae0394ca74915f4bb1, disabling compactions & flushes 2023-07-23 21:10:37,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:37,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:37,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. after waiting 0 ms 2023-07-23 21:10:37,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:37,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:37,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4. 2023-07-23 21:10:37,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b14508269b57178630b2aa37455745e4: 2023-07-23 21:10:37,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:37,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b14508269b57178630b2aa37455745e4 2023-07-23 21:10:37,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:37,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0853b36ddf82ccd2bb3351769c47343d, disabling compactions & flushes 2023-07-23 21:10:37,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:37,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:37,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 
after waiting 0 ms 2023-07-23 21:10:37,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:37,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1. 2023-07-23 21:10:37,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22f598946c69a6ae0394ca74915f4bb1: 2023-07-23 21:10:37,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=b14508269b57178630b2aa37455745e4, regionState=CLOSED 2023-07-23 21:10:37,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637420"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146637420"}]},"ts":"1690146637420"} 2023-07-23 21:10:37,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:37,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:37,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23b0304928cf1d546329202cbb9a8226, disabling compactions & flushes 2023-07-23 21:10:37,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:37,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 2023-07-23 21:10:37,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. after waiting 0 ms 2023-07-23 21:10:37,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 
2023-07-23 21:10:37,424 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:37,425 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=22f598946c69a6ae0394ca74915f4bb1, regionState=CLOSED 2023-07-23 21:10:37,425 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637425"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146637425"}]},"ts":"1690146637425"} 2023-07-23 21:10:37,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=62 2023-07-23 21:10:37,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=62, state=SUCCESS; CloseRegionProcedure b14508269b57178630b2aa37455745e4, server=jenkins-hbase4.apache.org,35321,1690146633061 in 188 msec 2023-07-23 21:10:37,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b14508269b57178630b2aa37455745e4, UNASSIGN in 203 msec 2023-07-23 21:10:37,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:37,435 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-23 21:10:37,436 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure 22f598946c69a6ae0394ca74915f4bb1, server=jenkins-hbase4.apache.org,34893,1690146629259 in 189 msec 2023-07-23 21:10:37,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:37,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d. 2023-07-23 21:10:37,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226. 
2023-07-23 21:10:37,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23b0304928cf1d546329202cbb9a8226: 2023-07-23 21:10:37,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0853b36ddf82ccd2bb3351769c47343d: 2023-07-23 21:10:37,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22f598946c69a6ae0394ca74915f4bb1, UNASSIGN in 211 msec 2023-07-23 21:10:37,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:37,446 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0853b36ddf82ccd2bb3351769c47343d, regionState=CLOSED 2023-07-23 21:10:37,446 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146637445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146637445"}]},"ts":"1690146637445"} 2023-07-23 21:10:37,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:37,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:37,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a0aede0edbdbaae7fc57abbd2fd0173d, disabling compactions & flushes 2023-07-23 21:10:37,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:37,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:37,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. after waiting 0 ms 2023-07-23 21:10:37,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 
2023-07-23 21:10:37,454 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=23b0304928cf1d546329202cbb9a8226, regionState=CLOSED 2023-07-23 21:10:37,455 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690146637454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146637454"}]},"ts":"1690146637454"} 2023-07-23 21:10:37,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:37,471 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=65 2023-07-23 21:10:37,473 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=65, state=SUCCESS; CloseRegionProcedure 0853b36ddf82ccd2bb3351769c47343d, server=jenkins-hbase4.apache.org,35321,1690146633061 in 218 msec 2023-07-23 21:10:37,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d. 2023-07-23 21:10:37,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a0aede0edbdbaae7fc57abbd2fd0173d: 2023-07-23 21:10:37,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-23 21:10:37,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure 23b0304928cf1d546329202cbb9a8226, server=jenkins-hbase4.apache.org,34893,1690146629259 in 222 msec 2023-07-23 21:10:37,475 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0853b36ddf82ccd2bb3351769c47343d, UNASSIGN in 246 msec 2023-07-23 21:10:37,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:37,476 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=23b0304928cf1d546329202cbb9a8226, UNASSIGN in 249 msec 2023-07-23 21:10:37,476 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=a0aede0edbdbaae7fc57abbd2fd0173d, regionState=CLOSED 2023-07-23 21:10:37,477 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690146637476"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146637476"}]},"ts":"1690146637476"} 2023-07-23 21:10:37,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-23 21:10:37,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=61, state=SUCCESS; CloseRegionProcedure a0aede0edbdbaae7fc57abbd2fd0173d, server=jenkins-hbase4.apache.org,34893,1690146629259 in 
241 msec 2023-07-23 21:10:37,492 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-23 21:10:37,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a0aede0edbdbaae7fc57abbd2fd0173d, UNASSIGN in 263 msec 2023-07-23 21:10:37,493 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146637493"}]},"ts":"1690146637493"} 2023-07-23 21:10:37,496 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 21:10:37,498 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 21:10:37,504 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 285 msec 2023-07-23 21:10:37,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-23 21:10:37,523 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-23 21:10:37,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,541 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_49429789' 2023-07-23 21:10:37,554 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 21:10:37,555 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,555 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 21:10:37,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,556 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 21:10:37,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,560 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering 
adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:10:37,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,561 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-23 21:10:37,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:37,562 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:37,562 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 21:10:37,563 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:10:37,563 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-23 21:10:37,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 21:10:37,572 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:37,572 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:37,572 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:37,573 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:37,573 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:37,580 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/recovered.edits] 2023-07-23 21:10:37,580 DEBUG 
[HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/recovered.edits] 2023-07-23 21:10:37,580 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/recovered.edits] 2023-07-23 21:10:37,580 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/recovered.edits] 2023-07-23 21:10:37,581 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/recovered.edits] 2023-07-23 21:10:37,597 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226/recovered.edits/4.seqid 2023-07-23 21:10:37,598 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/23b0304928cf1d546329202cbb9a8226 2023-07-23 21:10:37,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d/recovered.edits/4.seqid 2023-07-23 21:10:37,600 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/recovered.edits/4.seqid to 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1/recovered.edits/4.seqid 2023-07-23 21:10:37,600 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d/recovered.edits/4.seqid 2023-07-23 21:10:37,601 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0853b36ddf82ccd2bb3351769c47343d 2023-07-23 21:10:37,601 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4/recovered.edits/4.seqid 2023-07-23 21:10:37,601 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a0aede0edbdbaae7fc57abbd2fd0173d 2023-07-23 21:10:37,601 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22f598946c69a6ae0394ca74915f4bb1 2023-07-23 21:10:37,602 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b14508269b57178630b2aa37455745e4 2023-07-23 21:10:37,602 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 21:10:37,605 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,614 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 21:10:37,617 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-23 21:10:37,619 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146637619"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146637619"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146637619"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146637619"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,619 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146637619"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,622 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 21:10:37,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a0aede0edbdbaae7fc57abbd2fd0173d, NAME => 'Group_testTableMoveTruncateAndDrop,,1690146636141.a0aede0edbdbaae7fc57abbd2fd0173d.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => b14508269b57178630b2aa37455745e4, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690146636142.b14508269b57178630b2aa37455745e4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 23b0304928cf1d546329202cbb9a8226, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690146636142.23b0304928cf1d546329202cbb9a8226.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 22f598946c69a6ae0394ca74915f4bb1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690146636142.22f598946c69a6ae0394ca74915f4bb1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 0853b36ddf82ccd2bb3351769c47343d, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690146636142.0853b36ddf82ccd2bb3351769c47343d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 21:10:37,622 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
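
[Editorial sketch, not part of the captured log] The META Delete mutations and "Deleted 5 regions from META" lines above are the tail end of DeleteTableProcedure pid=71; from the client side, dropping a disabled table is a single call. A minimal sketch under the same assumptions as the previous one:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.isTableDisabled(table)) {
            admin.disableTable(table);   // DeleteTableProcedure requires a disabled table
          }
          // Archives the region dirs and removes the META rows, as in pid=71 above.
          admin.deleteTable(table);
          System.out.println("exists=" + admin.tableExists(table));   // false once the procedure completes
        }
      }
    }
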
2023-07-23 21:10:37,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146637622"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:37,624 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 21:10:37,627 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 21:10:37,629 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 96 msec 2023-07-23 21:10:37,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-23 21:10:37,673 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-23 21:10:37,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:37,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:37,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:37,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:37,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:37,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_49429789, current retry=0 2023-07-23 21:10:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_49429789 => default 2023-07-23 21:10:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:37,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_49429789 2023-07-23 21:10:37,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:37,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:37,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:37,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
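
[Editorial sketch, not part of the captured log] The MoveServers / RemoveRSGroup traffic above is the test's tearDown returning region servers to the default group and dropping the per-test group. It goes through the coprocessor-backed RSGroupAdminClient (the same client visible in the stack trace further down); the exact constructor and method signatures below are assumptions based on that class as used by TestRSGroupsBase, with the group name taken from the log:

    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        String group = "Group_testTableMoveTruncateAndDrop_49429789";   // group name from the log
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move whatever servers are still in the test group back to "default"...
          Set<Address> servers = rsGroupAdmin.getRSGroupInfo(group).getServers();
          if (!servers.isEmpty()) {
            rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
          }
          // ...then drop the now-empty group, mirroring the RemoveRSGroup request above.
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }

Note that the later attempt in this trace to move the master's own address (jenkins-hbase4.apache.org:46113) into the "master" group is rejected with a ConstraintException, which the test's tearDown logs as "Got this on setup, FYI" and tolerates.
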
2023-07-23 21:10:37,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:37,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:37,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:37,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:37,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:37,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:37,728 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:37,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:37,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:37,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:37,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:37,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147837749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:37,751 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-23 21:10:37,753 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 21:10:37,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 21:10:37,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 21:10:37,755 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 21:10:37,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 21:10:37,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 21:10:37,783 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=493 (was 422) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a-prefix:jenkins-hbase4.apache.org,35321,1690146633061 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:46635 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system 
java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1879807255-636-acceptor-0@1de837f4-ServerConnector@3feecd6d{HTTP/1.1, (http/1.1)}{0.0.0.0:35381} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59206@0x3850d5ef sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4e85f979-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-635 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35321-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_514328646_17 at /127.0.0.1:34592 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59206@0x3850d5ef-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35321 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1879807255-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35321Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_756688116_17 at 
/127.0.0.1:35202 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-308503825_17 at /127.0.0.1:58502 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_514328646_17 at /127.0.0.1:60372 [Receiving block BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59206@0x3850d5ef-SendThread(127.0.0.1:59206) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_514328646_17 at /127.0.0.1:34602 [Receiving block BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35321 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:46635 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_514328646_17 at /127.0.0.1:41850 [Receiving block BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1879807255-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-404994070-172.31.14.131-1690146623480:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 673) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=472 (was 472), ProcessCount=175 (was 175), AvailableMemoryMB=6597 (was 6211) - AvailableMemoryMB LEAK? -
2023-07-23 21:10:37,804 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=493, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=472, ProcessCount=175, AvailableMemoryMB=6596
2023-07-23 21:10:37,804 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testValidGroupNames
2023-07-23 21:10:37,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 21:10:37,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 21:10:37,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 21:10:37,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
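The cleanup sequence recorded around this point boils down to one rsgroup RPC: the test's teardown/setup path (TestRSGroupsBase.tearDownAfterMethod in the trace earlier in this log) asks the master to move the master's own address, jenkins-hbase4.apache.org:46113, into the "master" group, and RSGroupAdminServer.moveServers rejects it with the ConstraintException shown again just below because that address is not a live region server; the test logs it as "Got this on setup, FYI" and carries on. The following is a minimal sketch of that call path, not the test's actual code: the class and method names come from the stack traces in this log, while the class name MoveMasterToRSGroupSketch, the RSGroupAdminClient constructor, and the moveServers(Set<Address>, String) signature are assumptions based on the branch-2.x hbase-rsgroup module.

// A minimal sketch, assuming the branch-2.x hbase-rsgroup client API; host and port are
// copied from this log purely for illustration.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToRSGroupSketch { // hypothetical class name, not from the log
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The teardown tries to move the master's address (port 46113 above is the master
      // RPC port) into the "master" group. The master is not a live region server, so the
      // server side rejects the request with the ConstraintException logged here.
      Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 46113);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
      } catch (ConstraintException e) {
        // The test treats this as benign and only logs "Got this on setup, FYI".
      }
    }
  }
}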
2023-07-23 21:10:37,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:37,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:37,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:37,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:37,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:37,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:37,843 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:37,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:37,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:37,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:37,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:37,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147837868, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:37,869 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:37,871 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:37,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,873 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:37,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:37,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:37,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-23 21:10:37,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:56014 deadline: 1690147837875, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 21:10:37,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-23 21:10:37,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:56014 deadline: 1690147837877, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 21:10:37,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-23 21:10:37,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:56014 deadline: 1690147837878, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 21:10:37,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-23 21:10:37,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-23 21:10:37,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:37,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:37,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:37,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:37,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:37,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:37,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:37,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-23 21:10:37,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:37,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:37,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:37,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:37,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:37,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:37,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:37,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:37,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:37,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:37,935 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:37,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:37,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:37,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:37,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:37,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:37,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:37,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:37,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147837959, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:37,960 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:37,962 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:37,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:37,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:37,964 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:37,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:37,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:37,990 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=498 (was 493) Potentially hanging thread: hconnection-0x724df952-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=539 (was 472) - SystemLoadAverage LEAK? 
-, ProcessCount=175 (was 175), AvailableMemoryMB=6538 (was 6596) 2023-07-23 21:10:38,019 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=498, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=539, ProcessCount=175, AvailableMemoryMB=6536 2023-07-23 21:10:38,019 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-23 21:10:38,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:38,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:38,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:38,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:38,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:38,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:38,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:38,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:38,041 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:38,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:38,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:38,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:38,049 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:38,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:38,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:38,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147838065, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:38,066 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:38,068 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:38,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,070 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:38,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:38,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:38,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:38,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:38,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 2023-07-23 21:10:38,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:38,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:38,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:38,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:38,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:34893] to rsgroup bar 2023-07-23 21:10:38,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:38,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:38,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061, jenkins-hbase4.apache.org,37385,1690146629650] are moved back to default 2023-07-23 21:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-23 21:10:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:38,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 21:10:38,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:38,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE 
=> '0'} 2023-07-23 21:10:38,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:38,140 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:38,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 72 2023-07-23 21:10:38,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-23 21:10:38,142 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,143 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:38,144 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:38,144 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:38,147 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:38,149 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,150 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c empty. 
2023-07-23 21:10:38,151 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,151 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 21:10:38,174 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:38,179 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cd4223b58a432e72b3c1201a8e322a3c, NAME => 'Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing cd4223b58a432e72b3c1201a8e322a3c, disabling compactions & flushes 2023-07-23 21:10:38,196 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. after waiting 0 ms 2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,196 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:38,196 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:38,199 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:38,200 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146638200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146638200"}]},"ts":"1690146638200"} 2023-07-23 21:10:38,202 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:38,203 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:38,203 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146638203"}]},"ts":"1690146638203"} 2023-07-23 21:10:38,205 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-23 21:10:38,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, ASSIGN}] 2023-07-23 21:10:38,212 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, ASSIGN 2023-07-23 21:10:38,213 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=73, ppid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:38,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-23 21:10:38,365 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:38,365 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146638365"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146638365"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146638365"}]},"ts":"1690146638365"} 2023-07-23 21:10:38,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=73, state=RUNNABLE; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 
21:10:38,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-23 21:10:38,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd4223b58a432e72b3c1201a8e322a3c, NAME => 'Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:38,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:38,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,532 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,534 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:38,534 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:38,535 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd4223b58a432e72b3c1201a8e322a3c columnFamilyName f 2023-07-23 21:10:38,536 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(310): Store=cd4223b58a432e72b3c1201a8e322a3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:38,536 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:38,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd4223b58a432e72b3c1201a8e322a3c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10056478080, jitterRate=-0.06341749429702759}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:38,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:38,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c., pid=74, masterSystemTime=1690146638522 2023-07-23 21:10:38,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:38,567 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:38,567 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146638567"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146638567"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146638567"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146638567"}]},"ts":"1690146638567"} 2023-07-23 21:10:38,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=73 2023-07-23 21:10:38,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=73, state=SUCCESS; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455 in 202 msec 2023-07-23 21:10:38,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=72 2023-07-23 21:10:38,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=72, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, ASSIGN in 362 msec 2023-07-23 21:10:38,576 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:38,577 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146638577"}]},"ts":"1690146638577"} 2023-07-23 21:10:38,578 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-23 21:10:38,581 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=72, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:38,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 445 msec 2023-07-23 21:10:38,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=72 2023-07-23 21:10:38,750 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 72 completed 2023-07-23 21:10:38,750 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-23 21:10:38,751 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:38,760 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-23 21:10:38,760 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:38,761 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-23 21:10:38,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-23 21:10:38,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:38,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:38,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:38,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:38,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-23 21:10:38,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region cd4223b58a432e72b3c1201a8e322a3c to RSGroup bar 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:10:38,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:38,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE 2023-07-23 21:10:38,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-23 21:10:38,778 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE 2023-07-23 21:10:38,783 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:38,783 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146638783"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146638783"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146638783"}]},"ts":"1690146638783"} 2023-07-23 21:10:38,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:38,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd4223b58a432e72b3c1201a8e322a3c, disabling compactions & flushes 2023-07-23 21:10:38,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. after waiting 0 ms 2023-07-23 21:10:38,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:38,948 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:38,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:38,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:38,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cd4223b58a432e72b3c1201a8e322a3c move to jenkins-hbase4.apache.org,37385,1690146629650 record at close sequenceid=2 2023-07-23 21:10:38,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:38,951 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSED 2023-07-23 21:10:38,951 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146638951"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146638951"}]},"ts":"1690146638951"} 2023-07-23 21:10:38,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-23 21:10:38,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455 in 166 msec 2023-07-23 21:10:38,957 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:39,107 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:10:39,108 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:39,108 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146639108"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146639108"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146639108"}]},"ts":"1690146639108"} 2023-07-23 21:10:39,113 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:39,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:39,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd4223b58a432e72b3c1201a8e322a3c, NAME => 'Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:39,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:39,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,273 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,274 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:39,274 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:39,274 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd4223b58a432e72b3c1201a8e322a3c columnFamilyName f 2023-07-23 21:10:39,275 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(310): Store=cd4223b58a432e72b3c1201a8e322a3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:39,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,277 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd4223b58a432e72b3c1201a8e322a3c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10495348000, jitterRate=-0.02254454791545868}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:39,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:39,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c., pid=77, masterSystemTime=1690146639265 2023-07-23 21:10:39,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:39,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:39,285 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:39,285 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146639285"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146639285"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146639285"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146639285"}]},"ts":"1690146639285"} 2023-07-23 21:10:39,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-23 21:10:39,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,37385,1690146629650 in 177 msec 2023-07-23 21:10:39,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE in 514 msec 2023-07-23 21:10:39,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-23 21:10:39,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-23 21:10:39,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:39,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:39,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:39,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 21:10:39,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:39,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 21:10:39,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:39,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 284 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:56014 deadline: 1690147839785, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-23 21:10:39,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:39,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:39,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:56014 deadline: 1690147839787, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-23 21:10:39,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-23 21:10:39,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:39,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:39,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:39,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:39,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-23 21:10:39,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region cd4223b58a432e72b3c1201a8e322a3c to RSGroup default 2023-07-23 21:10:39,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE 2023-07-23 21:10:39,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 21:10:39,796 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE 2023-07-23 21:10:39,797 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:39,797 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146639797"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146639797"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146639797"}]},"ts":"1690146639797"} 2023-07-23 21:10:39,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:39,951 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,952 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd4223b58a432e72b3c1201a8e322a3c, disabling compactions & flushes 2023-07-23 21:10:39,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:39,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:39,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. after waiting 0 ms 2023-07-23 21:10:39,953 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:39,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:39,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:39,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:39,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cd4223b58a432e72b3c1201a8e322a3c move to jenkins-hbase4.apache.org,46093,1690146629455 record at close sequenceid=5 2023-07-23 21:10:39,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:39,961 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSED 2023-07-23 21:10:39,961 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146639961"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146639961"}]},"ts":"1690146639961"} 2023-07-23 21:10:39,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-23 21:10:39,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,37385,1690146629650 in 164 msec 2023-07-23 21:10:39,965 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:40,116 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:40,116 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146640115"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146640115"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146640115"}]},"ts":"1690146640115"} 2023-07-23 21:10:40,118 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:40,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 
2023-07-23 21:10:40,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cd4223b58a432e72b3c1201a8e322a3c, NAME => 'Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:40,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:40,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,277 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,278 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:40,278 DEBUG [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f 2023-07-23 21:10:40,279 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cd4223b58a432e72b3c1201a8e322a3c columnFamilyName f 2023-07-23 21:10:40,279 INFO [StoreOpener-cd4223b58a432e72b3c1201a8e322a3c-1] regionserver.HStore(310): Store=cd4223b58a432e72b3c1201a8e322a3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:40,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,282 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,286 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cd4223b58a432e72b3c1201a8e322a3c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11013092640, jitterRate=0.025674179196357727}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:40,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:40,287 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c., pid=80, masterSystemTime=1690146640269 2023-07-23 21:10:40,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,290 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:40,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146640290"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146640290"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146640290"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146640290"}]},"ts":"1690146640290"} 2023-07-23 21:10:40,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-23 21:10:40,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455 in 175 msec 2023-07-23 21:10:40,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, REOPEN/MOVE in 501 msec 2023-07-23 21:10:40,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-23 21:10:40,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
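Editor's note: the block above (pids 78-80) is the server side of a single MoveTables call: the RSGroupAdminEndpoint receives the request and each region of Group_testFailRemoveGroup is closed on its old server and reopened on a server of the target group via a REOPEN/MOVE TransitRegionStateProcedure. A minimal client-side sketch of the call that produces this sequence, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient API referenced in the stack traces later in this log (table and group names taken from the log; connection handling is illustrative only):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToDefaultGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Coprocessor-backed admin client used by these tests
      // (see RSGroupAdminClient in the stack traces further down).
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // One moveTables call drives the whole pid=78..80 sequence above:
      // CLOSE on the old server, OPEN on a server owned by the target group.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
          "default");
    }
  }
}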
2023-07-23 21:10:40,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:40,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:40,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 21:10:40,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:40,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 293 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:56014 deadline: 1690147840802, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
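Editor's note: the ConstraintException above is the expected guard in removeRSGroup: a group that still owns servers cannot be deleted. The next log entries show the required ordering, moving the three servers back to default and then removing bar. A hedged client-side sketch of that sequence, assuming the same RSGroupAdminClient API (host names and ports copied from the log, everything else illustrative):

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupAfterEmptying {
  // rsGroupAdmin obtained as in the previous sketch.
  static void removeBar(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      rsGroupAdmin.removeRSGroup("bar");          // rejected: the group still owns servers
    } catch (ConstraintException expected) {
      // "RSGroup bar has 3 servers; you must remove these servers from the RSGroup
      //  before the RSGroup can be removed."
    }
    Set<Address> servers = new HashSet<>();       // addresses taken from the log
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35321));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37385));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34893));
    rsGroupAdmin.moveServers(servers, "default"); // empty the group first ...
    rsGroupAdmin.removeRSGroup("bar");            // ... then the removal succeeds
  }
}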
2023-07-23 21:10:40,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:40,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:40,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 21:10:40,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:40,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:40,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-23 21:10:40,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061, jenkins-hbase4.apache.org,37385,1690146629650] are moved back to bar 2023-07-23 21:10:40,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-23 21:10:40,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:40,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:40,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:40,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 21:10:40,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:40,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:40,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:40,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:40,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:40,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:40,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:40,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:40,828 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-23 21:10:40,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-23 21:10:40,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:40,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-23 21:10:40,832 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146640832"}]},"ts":"1690146640832"} 2023-07-23 21:10:40,834 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-23 21:10:40,835 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-23 21:10:40,836 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, UNASSIGN}] 2023-07-23 21:10:40,838 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, UNASSIGN 2023-07-23 21:10:40,839 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:40,839 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146640839"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146640839"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146640839"}]},"ts":"1690146640839"} 2023-07-23 21:10:40,841 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:40,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-23 21:10:40,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:40,994 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cd4223b58a432e72b3c1201a8e322a3c, disabling compactions & flushes 2023-07-23 21:10:40,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. after waiting 0 ms 2023-07-23 21:10:40,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 21:10:40,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c. 2023-07-23 21:10:40,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cd4223b58a432e72b3c1201a8e322a3c: 2023-07-23 21:10:41,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:41,001 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=cd4223b58a432e72b3c1201a8e322a3c, regionState=CLOSED 2023-07-23 21:10:41,001 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690146641001"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146641001"}]},"ts":"1690146641001"} 2023-07-23 21:10:41,005 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-23 21:10:41,005 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; CloseRegionProcedure cd4223b58a432e72b3c1201a8e322a3c, server=jenkins-hbase4.apache.org,46093,1690146629455 in 162 msec 2023-07-23 21:10:41,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-23 21:10:41,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cd4223b58a432e72b3c1201a8e322a3c, UNASSIGN in 169 msec 2023-07-23 21:10:41,008 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146641007"}]},"ts":"1690146641007"} 2023-07-23 21:10:41,009 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-23 21:10:41,011 INFO [PEWorker-4] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-23 21:10:41,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 184 msec 2023-07-23 21:10:41,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-23 21:10:41,135 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-23 21:10:41,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-23 21:10:41,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,139 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=84, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-23 21:10:41,140 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=84, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:41,145 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:41,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-23 21:10:41,148 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits] 2023-07-23 21:10:41,155 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/10.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c/recovered.edits/10.seqid 2023-07-23 21:10:41,156 DEBUG [HFileArchiver-4] 
backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testFailRemoveGroup/cd4223b58a432e72b3c1201a8e322a3c 2023-07-23 21:10:41,156 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 21:10:41,159 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=84, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,162 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-23 21:10:41,164 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-23 21:10:41,165 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=84, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,165 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-23 21:10:41,165 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146641165"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:41,167 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:41,168 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cd4223b58a432e72b3c1201a8e322a3c, NAME => 'Group_testFailRemoveGroup,,1690146638136.cd4223b58a432e72b3c1201a8e322a3c.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:41,168 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-23 21:10:41,168 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146641168"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:41,171 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-23 21:10:41,173 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=84, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 21:10:41,174 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 37 msec 2023-07-23 21:10:41,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-23 21:10:41,248 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-23 21:10:41,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:41,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
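Editor's note: pids 81-84 above are the standard two-step drop of the test table: a DisableTableProcedure followed by a DeleteTableProcedure that archives the region directory and removes the region and table-state rows from hbase:meta. The client side of that sequence is the stock Admin API; a minimal sketch (table name from the log, connection obtained as in the earlier moveTables sketch):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTestTable {
  static void dropTable(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);   // server side: DisableTableProcedure (pid=81 above)
      }
      admin.deleteTable(table);      // server side: DeleteTableProcedure (pid=84 above)
    }
  }
}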
2023-07-23 21:10:41,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:41,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:41,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:41,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:41,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:41,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:41,266 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:41,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:41,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:41,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:41,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:41,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:41,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 341 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147841288, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:41,289 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:41,291 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:41,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,292 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:41,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:41,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:41,309 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=500 (was 498) Potentially hanging thread: hconnection-0x2a5e2fc3-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-951838729_17 at /127.0.0.1:34592 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_756688116_17 at /127.0.0.1:60494 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=769 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=539 (was 539), ProcessCount=175 (was 175), AvailableMemoryMB=6381 (was 6536) 2023-07-23 21:10:41,332 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=500, OpenFileDescriptor=769, MaxFileDescriptor=60000, SystemLoadAverage=539, ProcessCount=175, AvailableMemoryMB=6380 2023-07-23 21:10:41,332 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-23 21:10:41,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:41,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
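Editor's note: the "Got this on setup, FYI" warnings in this log come from TestRSGroupsBase attempting to move the address jenkins-hbase4.apache.org:46113 into the 'master' rsgroup during per-test setup and teardown; that appears to be the master's RPC port rather than a region server, so moveServers rejects it with the ConstraintException shown, and the harness logs it and continues. A hedged sketch of that tolerated failure, assuming the same client API (the reason for the rejection is an inference from the exception text, not stated explicitly in the log):

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class TolerateMasterMoveFailure {
  // rsGroupAdmin obtained as in the earlier sketches.
  static void tryMoveMasterAddress(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      // 46113 is the master's RPC port in this run; it is not a live region server,
      // so the server-side membership check rejects the move.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 46113)),
          "master");
    } catch (ConstraintException e) {
      // Expected: "Server jenkins-hbase4.apache.org:46113 is either offline or it
      // does not exist." -- logged by the test as "Got this on setup, FYI".
    }
  }
}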
2023-07-23 21:10:41,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:41,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:41,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:41,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:41,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:41,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:41,352 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:41,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:41,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:41,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:41,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:41,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:41,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 369 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147841364, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:41,365 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:41,369 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:41,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,370 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:41,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:41,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:41,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:41,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:41,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_52710846 2023-07-23 21:10:41,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,376 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:41,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:41,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:41,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34893] to rsgroup Group_testMultiTableMove_52710846 2023-07-23 21:10:41,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:41,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:41,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:41,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259] are moved back to default 2023-07-23 21:10:41,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_52710846 2023-07-23 21:10:41,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:41,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:41,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:41,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_52710846 2023-07-23 21:10:41,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:41,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:41,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:41,399 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:41,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 85 2023-07-23 21:10:41,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-23 21:10:41,402 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:41,402 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:41,403 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:41,403 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:41,408 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:41,409 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,410 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d empty. 
2023-07-23 21:10:41,411 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,411 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 21:10:41,435 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:41,436 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 08eedeecb332b524f934cb7590dc490d, NAME => 'GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 08eedeecb332b524f934cb7590dc490d, disabling compactions & flushes 2023-07-23 21:10:41,451 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. after waiting 0 ms 2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:41,451 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 
2023-07-23 21:10:41,451 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 08eedeecb332b524f934cb7590dc490d: 2023-07-23 21:10:41,454 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:41,455 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146641455"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146641455"}]},"ts":"1690146641455"} 2023-07-23 21:10:41,457 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:41,458 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:41,458 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146641458"}]},"ts":"1690146641458"} 2023-07-23 21:10:41,459 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-23 21:10:41,462 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:41,463 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:41,463 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:41,463 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:41,463 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:41,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, ASSIGN}] 2023-07-23 21:10:41,467 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, ASSIGN 2023-07-23 21:10:41,468 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:41,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-23 21:10:41,619 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:10:41,620 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:41,620 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146641620"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146641620"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146641620"}]},"ts":"1690146641620"} 2023-07-23 21:10:41,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE; OpenRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:41,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-23 21:10:41,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:41,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08eedeecb332b524f934cb7590dc490d, NAME => 'GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:41,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:41,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,780 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,781 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,783 DEBUG [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/f 2023-07-23 21:10:41,783 DEBUG [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/f 2023-07-23 21:10:41,783 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08eedeecb332b524f934cb7590dc490d columnFamilyName f 2023-07-23 21:10:41,784 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] regionserver.HStore(310): Store=08eedeecb332b524f934cb7590dc490d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:41,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:41,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:41,793 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08eedeecb332b524f934cb7590dc490d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10938399680, jitterRate=0.018717855215072632}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:41,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08eedeecb332b524f934cb7590dc490d: 2023-07-23 21:10:41,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d., pid=87, masterSystemTime=1690146641775 2023-07-23 21:10:41,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:41,796 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 
2023-07-23 21:10:41,796 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:41,796 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146641796"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146641796"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146641796"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146641796"}]},"ts":"1690146641796"} 2023-07-23 21:10:41,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-23 21:10:41,800 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; OpenRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,35321,1690146633061 in 175 msec 2023-07-23 21:10:41,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-23 21:10:41,802 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, ASSIGN in 337 msec 2023-07-23 21:10:41,802 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:41,803 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146641803"}]},"ts":"1690146641803"} 2023-07-23 21:10:41,812 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-23 21:10:41,818 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=85, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:41,820 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 421 msec 2023-07-23 21:10:42,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-23 21:10:42,005 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 85 completed 2023-07-23 21:10:42,005 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-23 21:10:42,005 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:42,013 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-23 21:10:42,013 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:42,013 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-23 21:10:42,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:42,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:42,018 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:42,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 88 2023-07-23 21:10:42,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:42,021 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:42,021 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:42,022 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:42,022 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:42,029 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:42,031 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,031 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 empty. 
2023-07-23 21:10:42,032 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,032 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-23 21:10:42,065 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:42,067 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 166563a376cb7a932e3e2754368bcf45, NAME => 'GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 166563a376cb7a932e3e2754368bcf45, disabling compactions & flushes 2023-07-23 21:10:42,084 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. after waiting 0 ms 2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,084 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 
2023-07-23 21:10:42,084 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 166563a376cb7a932e3e2754368bcf45: 2023-07-23 21:10:42,087 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:42,088 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642088"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146642088"}]},"ts":"1690146642088"} 2023-07-23 21:10:42,089 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:42,090 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:42,090 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146642090"}]},"ts":"1690146642090"} 2023-07-23 21:10:42,091 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-23 21:10:42,095 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:42,095 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:42,095 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:42,095 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:42,095 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:42,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, ASSIGN}] 2023-07-23 21:10:42,097 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, ASSIGN 2023-07-23 21:10:42,098 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=89, ppid=88, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35321,1690146633061; forceNewPlan=false, retain=false 2023-07-23 21:10:42,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:42,248 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:10:42,250 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:42,250 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642249"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146642249"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146642249"}]},"ts":"1690146642249"} 2023-07-23 21:10:42,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE; OpenRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:42,268 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:42,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:42,408 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 166563a376cb7a932e3e2754368bcf45, NAME => 'GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:42,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:42,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,410 INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,412 DEBUG [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/f 2023-07-23 21:10:42,412 DEBUG [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/f 2023-07-23 21:10:42,413 
INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 166563a376cb7a932e3e2754368bcf45 columnFamilyName f 2023-07-23 21:10:42,414 INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] regionserver.HStore(310): Store=166563a376cb7a932e3e2754368bcf45/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:42,416 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:42,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 166563a376cb7a932e3e2754368bcf45; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10211470400, jitterRate=-0.048982709646224976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:42,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 166563a376cb7a932e3e2754368bcf45: 2023-07-23 21:10:42,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45., pid=90, masterSystemTime=1690146642403 2023-07-23 21:10:42,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 
2023-07-23 21:10:42,432 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=89 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:42,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,432 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642432"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146642432"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146642432"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146642432"}]},"ts":"1690146642432"} 2023-07-23 21:10:42,436 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-23 21:10:42,436 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; OpenRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,35321,1690146633061 in 182 msec 2023-07-23 21:10:42,437 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-23 21:10:42,438 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, ASSIGN in 341 msec 2023-07-23 21:10:42,438 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:42,438 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146642438"}]},"ts":"1690146642438"} 2023-07-23 21:10:42,440 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-23 21:10:42,442 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=88, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:42,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 427 msec 2023-07-23 21:10:42,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-23 21:10:42,624 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 88 completed 2023-07-23 21:10:42,624 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. 
Timeout = 60000ms 2023-07-23 21:10:42,625 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:42,629 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 2023-07-23 21:10:42,629 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:42,630 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-23 21:10:42,630 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:42,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-23 21:10:42,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:42,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-23 21:10:42,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:42,648 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_52710846 2023-07-23 21:10:42,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_52710846 2023-07-23 21:10:42,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:42,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:42,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:42,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:42,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_52710846 2023-07-23 21:10:42,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 166563a376cb7a932e3e2754368bcf45 to RSGroup Group_testMultiTableMove_52710846 2023-07-23 21:10:42,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, 
region=166563a376cb7a932e3e2754368bcf45, REOPEN/MOVE 2023-07-23 21:10:42,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_52710846 2023-07-23 21:10:42,659 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, REOPEN/MOVE 2023-07-23 21:10:42,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 08eedeecb332b524f934cb7590dc490d to RSGroup Group_testMultiTableMove_52710846 2023-07-23 21:10:42,660 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:42,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, REOPEN/MOVE 2023-07-23 21:10:42,660 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642660"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146642660"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146642660"}]},"ts":"1690146642660"} 2023-07-23 21:10:42,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_52710846, current retry=0 2023-07-23 21:10:42,661 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, REOPEN/MOVE 2023-07-23 21:10:42,662 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:42,662 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642662"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146642662"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146642662"}]},"ts":"1690146642662"} 2023-07-23 21:10:42,663 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=91, state=RUNNABLE; CloseRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:42,666 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=92, state=RUNNABLE; CloseRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,35321,1690146633061}] 2023-07-23 21:10:42,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:42,819 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08eedeecb332b524f934cb7590dc490d, disabling compactions & flushes 2023-07-23 21:10:42,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:42,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:42,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. after waiting 0 ms 2023-07-23 21:10:42,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:42,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:42,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:42,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08eedeecb332b524f934cb7590dc490d: 2023-07-23 21:10:42,826 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 08eedeecb332b524f934cb7590dc490d move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:42,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:42,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 166563a376cb7a932e3e2754368bcf45, disabling compactions & flushes 2023-07-23 21:10:42,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:42,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. after waiting 0 ms 2023-07-23 21:10:42,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 
2023-07-23 21:10:42,830 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=CLOSED 2023-07-23 21:10:42,830 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642830"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146642830"}]},"ts":"1690146642830"} 2023-07-23 21:10:42,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=92 2023-07-23 21:10:42,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=92, state=SUCCESS; CloseRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,35321,1690146633061 in 168 msec 2023-07-23 21:10:42,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:42,835 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=92, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:42,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 
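The REOPEN/MOVE transitions above (pid=91 and pid=92) close each region on its current regionserver, record CLOSED in hbase:meta, and then look for an assign candidate on a server of the target group. Per region this is roughly what a single Admin#move call does; the following is a minimal sketch only, assuming the HBase 2.x Java client API (the class name and the way the destination server is picked are placeholders, not taken from this test):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;

public class MoveSingleRegion {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
      // Placeholder choices: first region of the table, first live regionserver.
      RegionInfo region = admin.getRegions(table).get(0);
      ServerName dest = admin.getClusterMetrics().getLiveServerMetrics().keySet().iterator().next();
      // One REOPEN/MOVE transition: close on the current server, reopen on 'dest'.
      // The master drives this through a TransitRegionStateProcedure, as in the log.
      admin.move(region.getEncodedNameAsBytes(), dest);
    }
  }
}

In the rsgroup path the destination is not arbitrary: RSGroupAdminServer only considers servers that belong to the target group before issuing the equivalent transition.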
2023-07-23 21:10:42,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 166563a376cb7a932e3e2754368bcf45: 2023-07-23 21:10:42,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 166563a376cb7a932e3e2754368bcf45 move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:42,838 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:42,838 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=CLOSED 2023-07-23 21:10:42,838 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642838"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146642838"}]},"ts":"1690146642838"} 2023-07-23 21:10:42,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=91 2023-07-23 21:10:42,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=91, state=SUCCESS; CloseRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,35321,1690146633061 in 176 msec 2023-07-23 21:10:42,843 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; forceNewPlan=false, retain=false 2023-07-23 21:10:42,985 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:42,985 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:42,986 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642985"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146642985"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146642985"}]},"ts":"1690146642985"} 2023-07-23 21:10:42,986 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146642985"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146642985"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146642985"}]},"ts":"1690146642985"} 2023-07-23 21:10:42,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=91, state=RUNNABLE; OpenRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:42,988 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=92, state=RUNNABLE; 
OpenRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:43,143 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:43,143 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 166563a376cb7a932e3e2754368bcf45, NAME => 'GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:43,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:43,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,150 INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,151 DEBUG [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/f 2023-07-23 21:10:43,151 DEBUG [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/f 2023-07-23 21:10:43,152 INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 166563a376cb7a932e3e2754368bcf45 columnFamilyName f 2023-07-23 21:10:43,153 INFO [StoreOpener-166563a376cb7a932e3e2754368bcf45-1] regionserver.HStore(310): Store=166563a376cb7a932e3e2754368bcf45/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:43,154 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:43,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 166563a376cb7a932e3e2754368bcf45; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11597152800, jitterRate=0.08006902039051056}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:43,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 166563a376cb7a932e3e2754368bcf45: 2023-07-23 21:10:43,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45., pid=95, masterSystemTime=1690146643139 2023-07-23 21:10:43,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:43,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:43,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 
2023-07-23 21:10:43,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 08eedeecb332b524f934cb7590dc490d, NAME => 'GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:43,165 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:43,165 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146643164"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146643164"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146643164"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146643164"}]},"ts":"1690146643164"} 2023-07-23 21:10:43,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:43,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,168 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,168 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=91 2023-07-23 21:10:43,168 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=91, state=SUCCESS; OpenRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,34893,1690146629259 in 179 msec 2023-07-23 21:10:43,170 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, REOPEN/MOVE in 512 msec 2023-07-23 21:10:43,170 DEBUG [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/f 2023-07-23 21:10:43,170 DEBUG [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/f 2023-07-23 21:10:43,171 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 08eedeecb332b524f934cb7590dc490d columnFamilyName f 2023-07-23 21:10:43,171 INFO [StoreOpener-08eedeecb332b524f934cb7590dc490d-1] regionserver.HStore(310): Store=08eedeecb332b524f934cb7590dc490d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:43,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 08eedeecb332b524f934cb7590dc490d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11015181760, jitterRate=0.025868743658065796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:43,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 08eedeecb332b524f934cb7590dc490d: 2023-07-23 21:10:43,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d., pid=96, masterSystemTime=1690146643139 2023-07-23 21:10:43,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:43,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 
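At this point both regions have been reopened on jenkins-hbase4.apache.org,34893. A client can confirm the new placement through RegionLocator; a small sketch, assuming the standard HBase 2.x client API (the class name and the expected host set are illustrative assumptions):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class VerifyRegionPlacement {
  public static void main(String[] args) throws Exception {
    // Hosts expected to belong to the target rsgroup -- placeholder value.
    Set<String> groupHosts = new HashSet<>(Arrays.asList("jenkins-hbase4.apache.org"));
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      for (String name : Arrays.asList("GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB")) {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(name))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Every region of both tables should now sit on a server of the group.
            if (!groupHosts.contains(loc.getServerName().getHostname())) {
              throw new IllegalStateException(loc.getRegion().getEncodedName()
                  + " is still on " + loc.getServerName());
            }
          }
        }
      }
    }
  }
}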
2023-07-23 21:10:43,186 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:43,186 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146643186"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146643186"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146643186"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146643186"}]},"ts":"1690146643186"} 2023-07-23 21:10:43,190 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=92 2023-07-23 21:10:43,191 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=92, state=SUCCESS; OpenRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,34893,1690146629259 in 200 msec 2023-07-23 21:10:43,195 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=92, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, REOPEN/MOVE in 530 msec 2023-07-23 21:10:43,551 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-23 21:10:43,552 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-23 21:10:43,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=91 2023-07-23 21:10:43,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_52710846. 
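The whole sequence from pid=91 onward was triggered by the single MoveTables RPC logged at 21:10:42,651. A minimal sketch of issuing that call through the RSGroupAdminClient helper in the hbase-rsgroup module (the same class this test's teardown uses, per the stack trace further below); the constructor and method signatures are assumed from that module and the class name here is a placeholder:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesToGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>();
      tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
      tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
      // Sends the RSGroupAdminService.MoveTables request seen in the log; the master
      // then reopens every region of both tables on servers of the target group.
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_52710846");
    }
  }
}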
2023-07-23 21:10:43,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:43,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:43,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:43,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-23 21:10:43,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:43,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-23 21:10:43,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:43,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:43,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:43,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_52710846 2023-07-23 21:10:43,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:43,676 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-23 21:10:43,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-23 21:10:43,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:43,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-23 21:10:43,681 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146643680"}]},"ts":"1690146643680"} 
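The GetRSGroupInfoOfTable, ListRSGroupInfos and GetRSGroupInfo requests above are the client verifying that both tables now belong to the new group. A sketch of the corresponding calls, under the same RSGroupAdminClient assumption as before (output handling is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class CheckGroupAssignment {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // GetRSGroupInfoOfTable: which group owns each table after the move?
      RSGroupInfo infoA = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
      RSGroupInfo infoB = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
      System.out.println("A -> " + infoA.getName() + ", B -> " + infoB.getName());
      // ListRSGroupInfos / GetRSGroupInfo: enumerate all groups, then inspect the target one.
      for (RSGroupInfo g : rsGroupAdmin.listRSGroups()) {
        System.out.println(g.getName() + " servers=" + g.getServers());
      }
      RSGroupInfo target = rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_52710846");
      System.out.println("target tables=" + target.getTables());
    }
  }
}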
2023-07-23 21:10:43,682 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-23 21:10:43,684 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-23 21:10:43,685 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, UNASSIGN}] 2023-07-23 21:10:43,687 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, UNASSIGN 2023-07-23 21:10:43,688 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:43,688 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146643688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146643688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146643688"}]},"ts":"1690146643688"} 2023-07-23 21:10:43,689 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; CloseRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:43,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-23 21:10:43,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 08eedeecb332b524f934cb7590dc490d, disabling compactions & flushes 2023-07-23 21:10:43,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:43,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:43,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. after waiting 0 ms 2023-07-23 21:10:43,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 
2023-07-23 21:10:43,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:43,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d. 2023-07-23 21:10:43,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 08eedeecb332b524f934cb7590dc490d: 2023-07-23 21:10:43,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,851 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=08eedeecb332b524f934cb7590dc490d, regionState=CLOSED 2023-07-23 21:10:43,852 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146643851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146643851"}]},"ts":"1690146643851"} 2023-07-23 21:10:43,857 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-23 21:10:43,857 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; CloseRegionProcedure 08eedeecb332b524f934cb7590dc490d, server=jenkins-hbase4.apache.org,34893,1690146629259 in 166 msec 2023-07-23 21:10:43,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-23 21:10:43,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=08eedeecb332b524f934cb7590dc490d, UNASSIGN in 172 msec 2023-07-23 21:10:43,859 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146643859"}]},"ts":"1690146643859"} 2023-07-23 21:10:43,861 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-23 21:10:43,863 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-23 21:10:43,866 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 188 msec 2023-07-23 21:10:43,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-23 21:10:43,983 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-23 21:10:43,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-23 21:10:43,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 
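The DisableTableProcedure (pid=97) that just completed and the DeleteTableProcedure (pid=100) stored above correspond to the ordinary blocking Admin calls. A minimal sketch, assuming the HBase 2.x Admin API (the class name is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTable {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("GrouptestMultiTableMoveA");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // disableTable submits a DisableTableProcedure and blocks until the client
      // sees the procedure reported as done (the "procId: 97 completed" line).
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);
      }
      // deleteTable submits a DeleteTableProcedure: the region directory is archived,
      // meta rows are removed and the table descriptor is dropped, as logged here.
      admin.deleteTable(table);
    }
  }
}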
2023-07-23 21:10:43,986 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:43,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_52710846' 2023-07-23 21:10:43,987 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=100, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:43,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:43,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:43,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:43,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:43,991 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 21:10:43,993 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits] 2023-07-23 21:10:43,999 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d/recovered.edits/7.seqid 2023-07-23 21:10:43,999 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveA/08eedeecb332b524f934cb7590dc490d 2023-07-23 21:10:43,999 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 21:10:44,001 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=100, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:44,003 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-23 21:10:44,005 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' 
descriptor. 2023-07-23 21:10:44,006 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=100, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:44,006 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-23 21:10:44,006 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146644006"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:44,007 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:44,007 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 08eedeecb332b524f934cb7590dc490d, NAME => 'GrouptestMultiTableMoveA,,1690146641396.08eedeecb332b524f934cb7590dc490d.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:44,008 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-23 21:10:44,008 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146644008"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:44,009 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-23 21:10:44,010 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=100, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 21:10:44,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 27 msec 2023-07-23 21:10:44,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-23 21:10:44,094 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-23 21:10:44,095 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-23 21:10:44,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-23 21:10:44,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:44,107 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146644107"}]},"ts":"1690146644107"} 2023-07-23 21:10:44,109 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-23 21:10:44,110 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-23 21:10:44,111 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, UNASSIGN}] 2023-07-23 21:10:44,113 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, UNASSIGN 2023-07-23 21:10:44,113 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:44,114 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146644113"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146644113"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146644113"}]},"ts":"1690146644113"} 2023-07-23 21:10:44,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; CloseRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:44,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:44,267 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:44,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 166563a376cb7a932e3e2754368bcf45, disabling compactions & flushes 2023-07-23 21:10:44,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:44,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:44,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. after waiting 0 ms 2023-07-23 21:10:44,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 2023-07-23 21:10:44,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:44,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45. 
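The repeated "Checking to see if procedure is done pid=101" lines are the client polling the master for procedure completion; the blocking disable used here is simply the async submission plus a wait on the returned future. A sketch using Admin#disableTableAsync (the timeout value is an arbitrary example):

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAsync {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Submits the DisableTableProcedure and returns immediately with a future.
      Future<Void> f = admin.disableTableAsync(TableName.valueOf("GrouptestMultiTableMoveB"));
      // While this waits, the client keeps asking the master whether the procedure
      // has finished -- the "Checking to see if procedure is done" DEBUG lines.
      f.get(60, TimeUnit.SECONDS);
    }
  }
}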
2023-07-23 21:10:44,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 166563a376cb7a932e3e2754368bcf45: 2023-07-23 21:10:44,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:44,275 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=166563a376cb7a932e3e2754368bcf45, regionState=CLOSED 2023-07-23 21:10:44,275 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690146644275"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146644275"}]},"ts":"1690146644275"} 2023-07-23 21:10:44,278 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-23 21:10:44,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; CloseRegionProcedure 166563a376cb7a932e3e2754368bcf45, server=jenkins-hbase4.apache.org,34893,1690146629259 in 162 msec 2023-07-23 21:10:44,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-23 21:10:44,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=166563a376cb7a932e3e2754368bcf45, UNASSIGN in 168 msec 2023-07-23 21:10:44,281 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146644281"}]},"ts":"1690146644281"} 2023-07-23 21:10:44,282 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-23 21:10:44,285 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-23 21:10:44,287 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 189 msec 2023-07-23 21:10:44,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-23 21:10:44,406 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 101 completed 2023-07-23 21:10:44,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-23 21:10:44,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,410 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_52710846' 2023-07-23 21:10:44,411 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=104, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:44,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,417 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:44,420 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits] 2023-07-23 21:10:44,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 21:10:44,427 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits/7.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45/recovered.edits/7.seqid 2023-07-23 21:10:44,427 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/GrouptestMultiTableMoveB/166563a376cb7a932e3e2754368bcf45 2023-07-23 21:10:44,427 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-23 21:10:44,431 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=104, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,434 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-23 21:10:44,439 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-23 21:10:44,441 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=104, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,441 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-23 21:10:44,441 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146644441"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:44,443 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:44,443 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 166563a376cb7a932e3e2754368bcf45, NAME => 'GrouptestMultiTableMoveB,,1690146642015.166563a376cb7a932e3e2754368bcf45.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:44,443 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-23 21:10:44,443 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146644443"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:44,444 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-23 21:10:44,451 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=104, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 21:10:44,452 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 44 msec 2023-07-23 21:10:44,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-23 21:10:44,525 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 104 completed 2023-07-23 21:10:44,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
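With pid=104 finished, both test tables are gone from hbase:meta and from the filesystem. A small sketch of verifying that from a client, assuming the standard Admin API (class name is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class VerifyDeleted {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // After the DELETE operations complete, neither table name should resolve.
      boolean a = admin.tableExists(TableName.valueOf("GrouptestMultiTableMoveA"));
      boolean b = admin.tableExists(TableName.valueOf("GrouptestMultiTableMoveB"));
      System.out.println("A exists=" + a + ", B exists=" + b); // expected: false, false
    }
  }
}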
2023-07-23 21:10:44,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:44,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:44,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:44,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:44,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
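The teardown that follows moves the group's remaining server back to the default rsgroup and then removes the now-empty group (the MoveServers and RemoveRSGroup requests below). A hedged sketch of the equivalent client calls, again assuming the RSGroupAdminClient API from the hbase-rsgroup module:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class TearDownGroup {
  public static void main(String[] args) throws Exception {
    String group = "Group_testMultiTableMove_52710846";
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move every server of the test group back into the default group...
      Set<Address> servers = new HashSet<>(rsGroupAdmin.getRSGroupInfo(group).getServers());
      if (!servers.isEmpty()) {
        rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
      }
      // ...then drop the now-empty group (the RemoveRSGroup request in the log).
      rsGroupAdmin.removeRSGroup(group);
    }
  }
}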
2023-07-23 21:10:44,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:44,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_52710846 2023-07-23 21:10:44,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:44,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_52710846, current retry=0 2023-07-23 21:10:44,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259] are moved back to Group_testMultiTableMove_52710846 2023-07-23 21:10:44,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_52710846 => default 2023-07-23 21:10:44,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_52710846 2023-07-23 21:10:44,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:44,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,565 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:44,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:44,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:44,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:44,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 507 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147844580, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:44,581 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:44,582 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:44,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,584 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:44,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,605 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=499 (was 500), OpenFileDescriptor=770 (was 769) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=504 (was 539), ProcessCount=175 (was 175), AvailableMemoryMB=6126 (was 6380) 2023-07-23 21:10:44,623 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=499, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=504, ProcessCount=175, AvailableMemoryMB=6122 2023-07-23 21:10:44,623 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-23 21:10:44,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
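The recurring ConstraintException above ("Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist.") comes from trying to move the active master's own address (port 46113, the same RPC port seen in every handler thread name) into the freshly re-added "master" group; since the master is not a live region server, the move is rejected, and the base test treats this as expected, logging only "Got this on setup, FYI". A hedged sketch of that tolerant step and the subsequent cleanup wait, using the same assumed RSGroupAdminClient API; the 60-second timeout and group names mirror the log, but the polling loop itself is illustrative rather than the test's actual Waiter-based code:

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MasterGroupSetupSketch {
  static void reAddMasterGroup(RSGroupAdminClient admin, Address masterAddress) throws Exception {
    admin.addRSGroup("master");
    try {
      // The master is not a region server, so this move is expected to fail.
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException expected) {
      // Mirrors the test's "Got this on setup, FYI" warning: ignore and continue.
    }
    // Poll until only the "default" and "master" groups remain (the cleanup wait above).
    long deadline = System.currentTimeMillis() + 60_000;
    while (System.currentTimeMillis() < deadline) {
      boolean clean = admin.listRSGroups().stream()
          .allMatch(g -> g.getName().equals(RSGroupInfo.DEFAULT_GROUP)
              || g.getName().equals("master"));
      if (clean) {
        return;
      }
      Thread.sleep(100);
    }
    throw new IllegalStateException("rsgroup cleanup did not finish in time");
  }
}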
2023-07-23 21:10:44,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:44,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:44,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:44,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,648 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:44,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:44,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:44,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:44,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 535 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147844661, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:44,662 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:44,664 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,665 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:44,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-23 21:10:44,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup oldGroup 2023-07-23 21:10:44,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:44,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to default 2023-07-23 21:10:44,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-23 21:10:44,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 21:10:44,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 21:10:44,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-23 21:10:44,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 21:10:44,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:44,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37385] to rsgroup anotherRSGroup 2023-07-23 21:10:44,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 21:10:44,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-23 21:10:44,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:44,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:44,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37385,1690146629650] are moved back to default 2023-07-23 21:10:44,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-23 21:10:44,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 21:10:44,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 21:10:44,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-23 21:10:44,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 569 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:56014 deadline: 1690147844730, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-23 21:10:44,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-23 21:10:44,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:56014 deadline: 1690147844733, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-23 21:10:44,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-23 21:10:44,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:56014 deadline: 1690147844734, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-23 21:10:44,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-23 21:10:44,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:56014 deadline: 1690147844736, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-23 21:10:44,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
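The four RenameRSGroup attempts above exercise the rename preconditions: the default group can never be renamed, the source group must exist, and the target name must not already be taken (so only a rename of oldGroup to an unused name would succeed). The sketch below paraphrases those checks exactly as their exception messages appear in the log; it is an illustration of the observed constraints, not the actual RSGroupInfoManagerImpl code, and the groups map is a stand-in for the manager's in-memory state:

import java.util.Map;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RenamePreconditionSketch {
  // groups: current group name -> group info (stand-in for the manager's state).
  static void checkRename(Map<String, RSGroupInfo> groups, String oldName, String newName)
      throws ConstraintException {
    if (RSGroupInfo.DEFAULT_GROUP.equals(oldName)) {
      throw new ConstraintException("Can't rename default rsgroup");
    }
    if (!groups.containsKey(oldName)) {
      throw new ConstraintException("RSGroup " + oldName + " does not exist");
    }
    if (groups.containsKey(newName)) {
      throw new ConstraintException("Group already exists: " + newName);
    }
    // Otherwise the rename proceeds and the /hbase/rsgroup znodes are rewritten,
    // as the "Updating znode" / "Writing ZK GroupInfo count" entries show for other mutations.
  }
}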
2023-07-23 21:10:44,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37385] to rsgroup default 2023-07-23 21:10:44,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 21:10:44,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:44,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-23 21:10:44,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37385,1690146629650] are moved back to anotherRSGroup 2023-07-23 21:10:44,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-23 21:10:44,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-23 21:10:44,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 21:10:44,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-23 21:10:44,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:44,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 21:10:44,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-23 21:10:44,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to oldGroup 2023-07-23 21:10:44,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-23 21:10:44,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-23 21:10:44,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:44,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
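Throughout these entries servers appear in two notations: the Address form "host:port" used by the move-servers requests (e.g. jenkins-hbase4.apache.org:35321) and the ServerName form "host,port,startcode" printed once regions are involved (e.g. jenkins-hbase4.apache.org,35321,1690146633061). A tiny conversion sketch, assuming the ServerName and org.apache.hadoop.hbase.net.Address helpers available on branch-2.4:

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.net.Address;

public class ServerNotationSketch {
  public static void main(String[] args) {
    // ServerName carries the start code; Address is just host:port.
    ServerName sn = ServerName.valueOf("jenkins-hbase4.apache.org,35321,1690146633061");
    Address fromServerName = sn.getAddress();
    Address parsed = Address.fromString("jenkins-hbase4.apache.org:35321");
    // true: the start code is not part of the Address used by MoveServers requests.
    System.out.println(fromServerName.equals(parsed));
  }
}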
2023-07-23 21:10:44,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:44,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:44,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:44,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,793 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:44,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:44,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:44,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:44,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 611 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147844810, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:44,811 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:44,814 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:44,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,815 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:44,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,837 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=503 (was 499) Potentially hanging thread: hconnection-0x724df952-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=770 (was 770), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=504 (was 504), ProcessCount=175 (was 175), AvailableMemoryMB=6108 (was 6122) 2023-07-23 21:10:44,837 WARN [Listener at localhost/39787] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-23 21:10:44,858 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=503, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=504, ProcessCount=175, AvailableMemoryMB=6103 2023-07-23 21:10:44,858 WARN [Listener at localhost/39787] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-23 21:10:44,859 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-23 21:10:44,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:44,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
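[Editor's note] Both the previous teardown and the setup that follows below try to move jenkins-hbase4.apache.org:46113 (the active master's RPC address) into the "master" group, and the master rejects it with the ConstraintException shown above; TestRSGroupsBase logs it as "Got this on setup, FYI" and continues. A caller that wants to avoid that exception could filter the candidate addresses against the live region servers first. This is only an illustrative guard under that assumption, not what the test itself does:

import java.io.IOException;
import java.util.EnumSet;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class LiveServerGuardSketch {
  // Only ask the master to move addresses that belong to live region servers; the
  // master's own port (46113 above) is not one of them, hence the ConstraintException.
  static void moveLiveServers(Connection conn, Set<Address> wanted, String group)
      throws IOException {
    try (Admin admin = conn.getAdmin()) {
      Set<Address> live = admin
          .getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS))
          .getLiveServerMetrics().keySet().stream()
          .map(ServerName::getAddress)
          .collect(Collectors.toSet());
      wanted.retainAll(live); // drop anything offline or not a region server
      if (!wanted.isEmpty()) {
        new RSGroupAdminClient(conn).moveServers(wanted, group);
      }
    }
  }
}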
2023-07-23 21:10:44,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:44,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:44,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:44,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:44,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:44,875 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:44,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:44,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:44,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:44,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:44,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 639 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147844890, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:44,891 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:44,893 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:44,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,895 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:44,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:44,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-23 21:10:44,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:44,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:44,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup oldgroup 2023-07-23 21:10:44,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:44,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:44,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to default 2023-07-23 21:10:44,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-23 21:10:44,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:44,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:44,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:44,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-23 21:10:44,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:44,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:44,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=105, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-23 21:10:44,939 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:44,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 105 2023-07-23 21:10:44,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-23 21:10:44,941 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:44,941 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:44,942 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:44,942 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:44,947 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:44,949 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:44,949 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 empty. 
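[Editor's note] The HMaster$4(2112) entry above is the server-side record of the create request for 'testRename' with a single column family 'tr' and REGION_REPLICATION => '1'. A client would typically build an equivalent descriptor as sketched below; the class name is illustrative and the Connection is assumed to exist.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameSketch {
  // Builds a descriptor equivalent to the one logged by HMaster: one column family
  // 'tr' with default settings and REGION_REPLICATION => '1'.
  static void createTable(Connection conn) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(desc); // blocks until the CreateTableProcedure (pid=105 above) completes
    }
  }
}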
2023-07-23 21:10:44,950 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:44,950 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-23 21:10:44,967 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:44,969 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => f333c28154de4e8e257c6e5c2c5e0d35, NAME => 'testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:44,989 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:44,990 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing f333c28154de4e8e257c6e5c2c5e0d35, disabling compactions & flushes 2023-07-23 21:10:44,990 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:44,990 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:44,990 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. after waiting 0 ms 2023-07-23 21:10:44,990 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:44,990 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:44,990 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:44,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:44,996 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146644995"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146644995"}]},"ts":"1690146644995"} 2023-07-23 21:10:44,997 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
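[Editor's note] The MetaTableAccessor Put entries above (and the RegionStateStore ones further down) write the region's info:regioninfo and info:state cells, and later info:sn, into hbase:meta. For illustration, those cells can be read back with an ordinary client scan of hbase:meta; the helper below is a sketch, not part of the test.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaRowsSketch {
  // Print the info:state and info:sn cells for each testRename region row in hbase:meta.
  static void dumpTestRenameRows(Connection conn) throws IOException {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(
             new Scan().setRowPrefixFilter(Bytes.toBytes("testRename,")))) {
      for (Result r : scanner) {
        byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
        byte[] sn = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"));
        System.out.println(Bytes.toString(r.getRow())
            + " state=" + (state == null ? "-" : Bytes.toString(state))
            + " sn=" + (sn == null ? "-" : Bytes.toString(sn)));
      }
    }
  }
}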
2023-07-23 21:10:44,998 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:44,998 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146644998"}]},"ts":"1690146644998"} 2023-07-23 21:10:45,000 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-23 21:10:45,009 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:45,009 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:45,009 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:45,009 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:45,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, ASSIGN}] 2023-07-23 21:10:45,012 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, ASSIGN 2023-07-23 21:10:45,012 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=106, ppid=105, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:45,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-23 21:10:45,163 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:10:45,164 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:45,164 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146645164"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146645164"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146645164"}]},"ts":"1690146645164"} 2023-07-23 21:10:45,166 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:45,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-23 21:10:45,322 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f333c28154de4e8e257c6e5c2c5e0d35, NAME => 'testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:45,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:45,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,328 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,331 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:45,331 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:45,332 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f333c28154de4e8e257c6e5c2c5e0d35 columnFamilyName tr 2023-07-23 21:10:45,333 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(310): Store=f333c28154de4e8e257c6e5c2c5e0d35/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:45,333 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,334 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,337 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:45,340 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f333c28154de4e8e257c6e5c2c5e0d35; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11813530560, jitterRate=0.10022076964378357}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:45,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:45,341 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35., pid=107, masterSystemTime=1690146645318 2023-07-23 21:10:45,343 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 
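[Editor's note] After the open above, region f333c28154de4e8e257c6e5c2c5e0d35 is served by jenkins-hbase4.apache.org,46093,1690146629455. A client can confirm the current placement through RegionLocator, as in this sketch (class name illustrative, Connection assumed):

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  // Ask the client where each testRename region is currently deployed; right after
  // the open above this should report the ...,46093,... region server.
  static void printLocations(Connection conn) throws IOException {
    try (RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("testRename"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName()
            + " -> " + loc.getServerName());
      }
    }
  }
}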
2023-07-23 21:10:45,343 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:45,343 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146645343"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146645343"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146645343"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146645343"}]},"ts":"1690146645343"} 2023-07-23 21:10:45,346 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-23 21:10:45,347 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,46093,1690146629455 in 179 msec 2023-07-23 21:10:45,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=105 2023-07-23 21:10:45,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=105, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, ASSIGN in 337 msec 2023-07-23 21:10:45,349 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:45,350 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146645350"}]},"ts":"1690146645350"} 2023-07-23 21:10:45,353 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-23 21:10:45,355 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=105, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:45,357 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, state=SUCCESS; CreateTableProcedure table=testRename in 419 msec 2023-07-23 21:10:45,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-23 21:10:45,544 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 105 completed 2023-07-23 21:10:45,544 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-23 21:10:45,544 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:45,548 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-23 21:10:45,548 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:45,548 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
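[Editor's note] The HBaseTestingUtility lines above poll until every region of testRename is assigned, using the default 60,000 ms timeout. In test code that wait usually looks roughly like the sketch below; the class name is illustrative and the utility instance is assumed to be already started.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitAssignedSketch {
  // Mirrors the assignment wait logged above, then double-checks availability
  // through the Admin API.
  static void waitForTestRename(HBaseTestingUtility util) throws Exception {
    TableName tn = TableName.valueOf("testRename");
    util.waitUntilAllRegionsAssigned(tn);
    try (Admin admin = util.getConnection().getAdmin()) {
      if (!admin.isTableAvailable(tn)) {
        throw new IOException("testRename did not become available");
      }
    }
  }
}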
2023-07-23 21:10:45,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-23 21:10:45,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:45,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:45,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:45,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:45,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-23 21:10:45,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region f333c28154de4e8e257c6e5c2c5e0d35 to RSGroup oldgroup 2023-07-23 21:10:45,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:45,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:45,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:45,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:45,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:45,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE 2023-07-23 21:10:45,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-23 21:10:45,562 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE 2023-07-23 21:10:45,565 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:45,565 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146645565"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146645565"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146645565"}]},"ts":"1690146645565"} 2023-07-23 21:10:45,567 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, 
ppid=108, state=RUNNABLE; CloseRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:45,720 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f333c28154de4e8e257c6e5c2c5e0d35, disabling compactions & flushes 2023-07-23 21:10:45,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. after waiting 0 ms 2023-07-23 21:10:45,721 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:45,732 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:45,732 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:45,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f333c28154de4e8e257c6e5c2c5e0d35 move to jenkins-hbase4.apache.org,34893,1690146629259 record at close sequenceid=2 2023-07-23 21:10:45,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:45,736 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=CLOSED 2023-07-23 21:10:45,736 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146645736"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146645736"}]},"ts":"1690146645736"} 2023-07-23 21:10:45,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-23 21:10:45,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,46093,1690146629455 in 171 msec 2023-07-23 21:10:45,741 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=108, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34893,1690146629259; 
forceNewPlan=false, retain=false 2023-07-23 21:10:45,892 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:10:45,892 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:45,892 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146645892"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146645892"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146645892"}]},"ts":"1690146645892"} 2023-07-23 21:10:45,894 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=108, state=RUNNABLE; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:46,051 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:46,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f333c28154de4e8e257c6e5c2c5e0d35, NAME => 'testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:46,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:46,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,052 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,053 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,055 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:46,055 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:46,055 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f333c28154de4e8e257c6e5c2c5e0d35 columnFamilyName tr 2023-07-23 21:10:46,056 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(310): Store=f333c28154de4e8e257c6e5c2c5e0d35/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:46,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:46,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f333c28154de4e8e257c6e5c2c5e0d35; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11061730080, jitterRate=0.030203893780708313}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:46,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:46,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35., pid=110, masterSystemTime=1690146646046 2023-07-23 21:10:46,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:46,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 
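The span above covers the MoveTables request that places testRename in rsgroup oldgroup: the master rewrites the group znodes, then closes region f333c28154de4e8e257c6e5c2c5e0d35 and reopens it on a server that belongs to the target group. A hedged sketch of the client side, assuming the RSGroupAdminClient from this hbase-rsgroup module with moveTables and getRSGroupInfoOfTable methods matching the RPC names in these records:

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("testRename");

      // Issues the MoveTables RPC; the master updates the group znodes and then runs the
      // REOPEN/MOVE TransitRegionStateProcedure that relocates the table's region(s).
      rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");

      // Matches the GetRSGroupInfoOfTable records that verify the move.
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("testRename is now in group " + group.getName());
    }
  }
}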
2023-07-23 21:10:46,069 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:46,069 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146646069"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146646069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146646069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146646069"}]},"ts":"1690146646069"} 2023-07-23 21:10:46,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=108 2023-07-23 21:10:46,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=108, state=SUCCESS; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,34893,1690146629259 in 177 msec 2023-07-23 21:10:46,073 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE in 515 msec 2023-07-23 21:10:46,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=108 2023-07-23 21:10:46,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-23 21:10:46,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:46,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:46,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:46,569 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:46,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 21:10:46,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:46,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-23 21:10:46,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:46,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 21:10:46,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:46,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:46,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:46,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-23 21:10:46,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:46,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:46,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:46,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:46,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:46,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:46,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:46,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:46,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37385] to rsgroup normal 2023-07-23 21:10:46,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:46,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:46,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:46,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:46,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:46,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:46,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37385,1690146629650] are moved back to default 2023-07-23 21:10:46,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-23 21:10:46,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:46,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:46,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:46,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-23 21:10:46,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:46,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:46,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-23 21:10:46,598 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:46,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 111 2023-07-23 21:10:46,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 21:10:46,601 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:46,601 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:46,601 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:46,602 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-23 21:10:46,602 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:46,611 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:46,612 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:46,613 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/unmovedTable/4b41452589f00aa733370524c572da9b empty. 2023-07-23 21:10:46,614 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:46,614 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-23 21:10:46,648 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:46,650 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4b41452589f00aa733370524c572da9b, NAME => 'unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 4b41452589f00aa733370524c572da9b, disabling compactions & flushes 2023-07-23 21:10:46,680 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. after waiting 0 ms 2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:46,680 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
2023-07-23 21:10:46,680 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:46,683 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:46,684 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146646683"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146646683"}]},"ts":"1690146646683"} 2023-07-23 21:10:46,685 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:46,685 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:46,686 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146646686"}]},"ts":"1690146646686"} 2023-07-23 21:10:46,687 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-23 21:10:46,690 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, ASSIGN}] 2023-07-23 21:10:46,691 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, ASSIGN 2023-07-23 21:10:46,692 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:46,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 21:10:46,844 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:46,844 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146646844"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146646844"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146646844"}]},"ts":"1690146646844"} 2023-07-23 21:10:46,845 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:46,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=111 2023-07-23 21:10:47,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b41452589f00aa733370524c572da9b, NAME => 'unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,003 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,004 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:47,004 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:47,005 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b41452589f00aa733370524c572da9b columnFamilyName ut 2023-07-23 21:10:47,006 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(310): Store=4b41452589f00aa733370524c572da9b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:47,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,010 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:47,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b41452589f00aa733370524c572da9b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12056751200, jitterRate=0.12287245690822601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:47,013 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:47,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b., pid=113, masterSystemTime=1690146646997 2023-07-23 21:10:47,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
2023-07-23 21:10:47,015 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:47,015 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146647015"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146647015"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146647015"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146647015"}]},"ts":"1690146647015"} 2023-07-23 21:10:47,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-23 21:10:47,019 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455 in 172 msec 2023-07-23 21:10:47,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-23 21:10:47,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, ASSIGN in 329 msec 2023-07-23 21:10:47,021 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:47,021 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146647021"}]},"ts":"1690146647021"} 2023-07-23 21:10:47,022 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-23 21:10:47,024 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:47,025 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=unmovedTable in 428 msec 2023-07-23 21:10:47,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 21:10:47,203 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 111 completed 2023-07-23 21:10:47,203 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-23 21:10:47,204 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:47,208 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-23 21:10:47,208 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:47,208 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
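The records above add the group "normal", move server jenkins-hbase4.apache.org:37385 out of "default" into it, and create unmovedTable with family "ut". A sketch of the two group operations, again assuming the RSGroupAdminClient API from this module (the host and port are simply the values from this run):

import java.util.Collections;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // AddRSGroup RPC: creates the (initially empty) group "normal".
      rsGroupAdmin.addRSGroup("normal");

      // MoveServers RPC: moves the region server out of "default" and into "normal";
      // with no regions on it yet, the log above shows "Moving 0 region(s)".
      Address server = Address.fromParts("jenkins-hbase4.apache.org", 37385);
      rsGroupAdmin.moveServers(Collections.singleton(server), "normal");
    }
  }
}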
2023-07-23 21:10:47,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-23 21:10:47,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 21:10:47,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:47,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:47,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:47,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:47,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-23 21:10:47,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 4b41452589f00aa733370524c572da9b to RSGroup normal 2023-07-23 21:10:47,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE 2023-07-23 21:10:47,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-23 21:10:47,219 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE 2023-07-23 21:10:47,219 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:47,220 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146647219"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146647219"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146647219"}]},"ts":"1690146647219"} 2023-07-23 21:10:47,221 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:47,282 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:47,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b41452589f00aa733370524c572da9b, disabling compactions & flushes 2023-07-23 21:10:47,376 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. after waiting 0 ms 2023-07-23 21:10:47,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:47,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:47,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4b41452589f00aa733370524c572da9b move to jenkins-hbase4.apache.org,37385,1690146629650 record at close sequenceid=2 2023-07-23 21:10:47,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,388 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=CLOSED 2023-07-23 21:10:47,388 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146647387"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146647387"}]},"ts":"1690146647387"} 2023-07-23 21:10:47,391 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-23 21:10:47,391 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455 in 168 msec 2023-07-23 21:10:47,391 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:47,542 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:47,542 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146647542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146647542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146647542"}]},"ts":"1690146647542"} 2023-07-23 21:10:47,544 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:47,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b41452589f00aa733370524c572da9b, NAME => 'unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:47,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:47,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,715 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,716 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:47,716 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:47,717 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
4b41452589f00aa733370524c572da9b columnFamilyName ut 2023-07-23 21:10:47,718 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(310): Store=4b41452589f00aa733370524c572da9b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:47,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:47,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b41452589f00aa733370524c572da9b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10091687200, jitterRate=-0.060138389468193054}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:47,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:47,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b., pid=116, masterSystemTime=1690146647697 2023-07-23 21:10:47,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:47,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
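The records above reopen unmovedTable's region on jenkins-hbase4.apache.org:37385 to complete its move into group "normal"; the records that follow show the oldgroup-to-newgroup rename and the checks that testRename now reports "newgroup" while unmovedTable still reports "normal". A sketch of those calls, assuming RSGroupAdminClient exposes a renameRSGroup(oldName, newName) method matching the RenameRSGroup RPC logged below:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // RenameRSGroup RPC: the group keeps its servers and table mappings under the new name,
      // so no REOPEN/MOVE procedures follow the rename in the log.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

      // The follow-up GetRSGroupInfoOfTable records check the table-to-group mappings.
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename")).getName());
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable")).getName());
    }
  }
}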
2023-07-23 21:10:47,728 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:47,728 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146647728"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146647728"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146647728"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146647728"}]},"ts":"1690146647728"} 2023-07-23 21:10:47,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-23 21:10:47,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,37385,1690146629650 in 186 msec 2023-07-23 21:10:47,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE in 513 msec 2023-07-23 21:10:48,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-23 21:10:48,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-23 21:10:48,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:48,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:48,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:48,226 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:48,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 21:10:48,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:48,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-23 21:10:48,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:48,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 21:10:48,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:48,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-23 21:10:48,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:48,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:48,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:48,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:48,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-23 21:10:48,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-23 21:10:48,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:48,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:48,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-23 21:10:48,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:48,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 21:10:48,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:48,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 21:10:48,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:48,249 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:48,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:48,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-23 21:10:48,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:48,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:48,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:48,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:48,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-23 21:10:48,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region 4b41452589f00aa733370524c572da9b to RSGroup default 2023-07-23 21:10:48,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE 2023-07-23 21:10:48,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 21:10:48,258 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE 2023-07-23 21:10:48,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:48,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146648259"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146648259"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146648259"}]},"ts":"1690146648259"} 2023-07-23 21:10:48,261 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:48,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b41452589f00aa733370524c572da9b, disabling compactions & flushes 2023-07-23 21:10:48,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. after waiting 0 ms 2023-07-23 21:10:48,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:48,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:48,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4b41452589f00aa733370524c572da9b move to jenkins-hbase4.apache.org,46093,1690146629455 record at close sequenceid=5 2023-07-23 21:10:48,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,422 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=CLOSED 2023-07-23 21:10:48,422 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146648422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146648422"}]},"ts":"1690146648422"} 2023-07-23 21:10:48,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-23 21:10:48,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,37385,1690146629650 in 163 msec 2023-07-23 21:10:48,426 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:48,577 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:48,577 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146648576"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146648576"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146648576"}]},"ts":"1690146648576"} 2023-07-23 21:10:48,578 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:48,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b41452589f00aa733370524c572da9b, NAME => 'unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:48,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:48,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,736 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,737 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:48,737 DEBUG [StoreOpener-4b41452589f00aa733370524c572da9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/ut 2023-07-23 21:10:48,737 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b41452589f00aa733370524c572da9b columnFamilyName ut 2023-07-23 21:10:48,738 INFO [StoreOpener-4b41452589f00aa733370524c572da9b-1] regionserver.HStore(310): Store=4b41452589f00aa733370524c572da9b/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:48,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:48,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b41452589f00aa733370524c572da9b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10859514400, jitterRate=0.01137109100818634}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:48,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:48,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b., pid=119, masterSystemTime=1690146648730 2023-07-23 21:10:48,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:48,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
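
The entries above cover the two rsgroup operations exercised by this test: RenameRSGroup (oldgroup -> newgroup) followed by MoveTables, which reopens each region of the moved table on a server of the target group (here, unmovedTable going back to default, pid=117/118/119). Below is a minimal sketch, not the test's actual code, of driving the same calls through the RSGroupAdminClient that appears in these stack traces; the renameRSGroup method name is an assumption inferred from the RenameRSGroup RPC in the log.

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameAndMoveSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Rename the group: assumed client-side call behind the RenameRSGroup request above.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

          // Tables that belonged to oldgroup should now report newgroup
          // (the GetRSGroupInfoOfTable requests in the log verify exactly this).
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in group: " + info.getName());

          // Moving a table to another group triggers a REOPEN/MOVE of its regions,
          // as seen for unmovedTable in the TransitRegionStateProcedure entries above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "default");
        }
      }
    }
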
2023-07-23 21:10:48,746 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=4b41452589f00aa733370524c572da9b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:48,746 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690146648746"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146648746"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146648746"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146648746"}]},"ts":"1690146648746"} 2023-07-23 21:10:48,749 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-23 21:10:48,749 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 4b41452589f00aa733370524c572da9b, server=jenkins-hbase4.apache.org,46093,1690146629455 in 170 msec 2023-07-23 21:10:48,750 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=4b41452589f00aa733370524c572da9b, REOPEN/MOVE in 492 msec 2023-07-23 21:10:49,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-23 21:10:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-23 21:10:49,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:49,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37385] to rsgroup default 2023-07-23 21:10:49,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 21:10:49,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:49,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:49,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:10:49,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-23 21:10:49,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37385,1690146629650] are moved back to normal 2023-07-23 21:10:49,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-23 21:10:49,270 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:49,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-23 21:10:49,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:49,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:49,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 21:10:49,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:49,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:49,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:49,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:49,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:49,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:49,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-23 21:10:49,306 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:49,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:49,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:49,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-23 21:10:49,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(345): Moving region f333c28154de4e8e257c6e5c2c5e0d35 to RSGroup default 2023-07-23 21:10:49,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE 2023-07-23 21:10:49,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 21:10:49,311 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE 2023-07-23 21:10:49,312 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:49,312 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146649312"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649312"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649312"}]},"ts":"1690146649312"} 2023-07-23 21:10:49,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,34893,1690146629259}] 2023-07-23 21:10:49,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f333c28154de4e8e257c6e5c2c5e0d35, disabling compactions & flushes 2023-07-23 21:10:49,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 
after waiting 0 ms 2023-07-23 21:10:49,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 21:10:49,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:49,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f333c28154de4e8e257c6e5c2c5e0d35 move to jenkins-hbase4.apache.org,37385,1690146629650 record at close sequenceid=5 2023-07-23 21:10:49,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,480 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=CLOSED 2023-07-23 21:10:49,480 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146649480"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146649480"}]},"ts":"1690146649480"} 2023-07-23 21:10:49,483 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-23 21:10:49,483 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,34893,1690146629259 in 168 msec 2023-07-23 21:10:49,484 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:49,558 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-23 21:10:49,634 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
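
The move traced above follows the usual REOPEN/MOVE lifecycle: the region is closed on the source server, the balancer picks an assignment candidate in the target group ("Reassigned 1 regions"), the region is reopened elsewhere, and the RPC handler blocks in ProcedureSyncWait until the TransitRegionStateProcedure finishes. A rough sketch, under the assumption that only stock client APIs (RegionLocator, RSGroupAdminClient) are used, of how a caller could confirm afterwards that every region of a table landed on a server of the expected group; the helper name is hypothetical.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RegionGroupCheck {
      private RegionGroupCheck() {}

      /** Hypothetical helper: true if every region of the table is hosted inside the group. */
      static boolean allRegionsInGroup(Connection conn, TableName table, String group)
          throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Compare the hosting server's host:port against the group's server list.
            Address hostPort = Address.fromParts(
                loc.getServerName().getHostname(), loc.getServerName().getPort());
            if (!info.getServers().contains(hostPort)) {
              return false;
            }
          }
        }
        return true;
      }
    }
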
2023-07-23 21:10:49,635 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:49,635 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146649635"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146649635"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146649635"}]},"ts":"1690146649635"} 2023-07-23 21:10:49,636 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:49,792 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f333c28154de4e8e257c6e5c2c5e0d35, NAME => 'testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:49,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:49,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,795 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,796 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:49,796 DEBUG [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/tr 2023-07-23 21:10:49,796 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f333c28154de4e8e257c6e5c2c5e0d35 columnFamilyName tr 2023-07-23 21:10:49,797 INFO [StoreOpener-f333c28154de4e8e257c6e5c2c5e0d35-1] regionserver.HStore(310): Store=f333c28154de4e8e257c6e5c2c5e0d35/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:49,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,803 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:49,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f333c28154de4e8e257c6e5c2c5e0d35; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10360751360, jitterRate=-0.03507983684539795}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:49,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:49,805 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35., pid=122, masterSystemTime=1690146649788 2023-07-23 21:10:49,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:49,808 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 
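
Once the testRename region is open on its new server, the entries that follow show the per-test cleanup: the borrowed region servers are moved back to the default group and the temporary groups are removed, which is why the "Writing ZK GroupInfo count" value keeps shrinking. A minimal sketch of that cleanup order, assuming the same RSGroupAdminClient API; the host:port values are only examples copied from the log.

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class GroupTeardownSketch {
      private GroupTeardownSketch() {}

      static void cleanup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

        // 1. Return the region servers that were borrowed for the test to default.
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromString("jenkins-hbase4.apache.org:34893")); // example from the log
        servers.add(Address.fromString("jenkins-hbase4.apache.org:35321")); // example from the log
        rsGroupAdmin.moveServers(servers, "default");

        // 2. Drop the now-empty group; removing a group only succeeds once no servers
        //    or tables reference it, which is why the moves happen first.
        rsGroupAdmin.removeRSGroup("newgroup");
      }
    }
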
2023-07-23 21:10:49,808 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=f333c28154de4e8e257c6e5c2c5e0d35, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:49,808 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690146649808"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146649808"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146649808"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146649808"}]},"ts":"1690146649808"} 2023-07-23 21:10:49,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-23 21:10:49,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure f333c28154de4e8e257c6e5c2c5e0d35, server=jenkins-hbase4.apache.org,37385,1690146629650 in 174 msec 2023-07-23 21:10:49,819 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=f333c28154de4e8e257c6e5c2c5e0d35, REOPEN/MOVE in 503 msec 2023-07-23 21:10:50,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-23 21:10:50,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-23 21:10:50,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:50,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:50,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 21:10:50,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:50,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-23 21:10:50,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to newgroup 2023-07-23 21:10:50,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-23 21:10:50,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:50,323 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-23 21:10:50,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:50,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:50,332 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:50,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:50,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:50,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:50,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:50,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 759 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147850345, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:50,346 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
2023-07-23 21:10:50,348 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 21:10:50,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 21:10:50,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 21:10:50,349 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 21:10:50,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 21:10:50,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 21:10:50,370 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=496 (was 503), OpenFileDescriptor=740 (was 770), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 504), ProcessCount=173 (was 175), AvailableMemoryMB=8124 (was 6103) - AvailableMemoryMB LEAK? -
2023-07-23 21:10:50,393 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=496, OpenFileDescriptor=740, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8123
2023-07-23 21:10:50,393 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testBogusArgs
2023-07-23 21:10:50,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 21:10:50,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 21:10:50,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 21:10:50,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-23 21:10:50,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 21:10:50,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 21:10:50,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 21:10:50,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 21:10:50,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 21:10:50,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 21:10:50,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 21:10:50,426 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 21:10:50,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 21:10:50,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 21:10:50,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 21:10:50,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 21:10:50,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:50,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:50,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 787 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147850441, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:50,442 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:50,444 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:50,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,446 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:50,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:50,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-23 21:10:50,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:50,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-23 21:10:50,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-23 21:10:50,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-23 21:10:50,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-23 21:10:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 799 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:56014 deadline: 1690147850456, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-23 21:10:50,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-23 21:10:50,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:56014 deadline: 1690147850458, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 21:10:50,461 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:10:50,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-23 21:10:50,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-23 21:10:50,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 806 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:56014 deadline: 1690147850466, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 21:10:50,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:50,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
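[Editor's sketch, not part of the captured output] A minimal client-side reproduction of the bogus-argument calls that testBogusArgs drives against the RSGroupAdminService endpoints logged above, written against the branch-2.x hbase-rsgroup client. The only client entry point confirmed by the stack traces is RSGroupAdminClient.moveServers; the other method names (getRSGroupInfo, getRSGroupInfoOfTable, getRSGroupOfServer, removeRSGroup, balanceRSGroup) are assumed from the RSGroupAdmin interface, and the connection setup is illustrative.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Lookups with unknown names return null rather than throwing, which is
      // why the GetRSGroupInfo* requests above log no exception.
      System.out.println(rsGroupAdmin.getRSGroupInfo("bogus"));                                   // null
      System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")));   // null
      System.out.println(rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123)));       // null

      // Mutating calls against a nonexistent group are rejected server-side with
      // ConstraintException, logged above as "RSGroup does not exist: bogus" /
      // "RSGroup bogus does not exist".
      try {
        rsGroupAdmin.removeRSGroup("bogus");
      } catch (ConstraintException expected) {
        System.out.println("removeRSGroup: " + expected.getMessage());
      }
      try {
        rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      } catch (ConstraintException expected) {
        System.out.println("moveServers: " + expected.getMessage());
      }
      try {
        rsGroupAdmin.balanceRSGroup("bogus");
      } catch (ConstraintException expected) {
        System.out.println("balanceRSGroup: " + expected.getMessage());
      }
    }
  }
}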
2023-07-23 21:10:50,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:50,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:50,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:50,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:50,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:50,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:50,483 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:50,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:50,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:50,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:50,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:50,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 830 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147850493, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:50,496 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:50,497 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:50,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,498 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:50,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:50,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,515 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=500 (was 496) Potentially hanging thread: hconnection-0x724df952-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xf9bb2b5-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x724df952-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=740 (was 740), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 479), ProcessCount=173 (was 173), AvailableMemoryMB=8119 (was 8123) 2023-07-23 21:10:50,532 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=500, OpenFileDescriptor=740, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8118 2023-07-23 21:10:50,533 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-23 21:10:50,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:50,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
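[Editor's sketch, not part of the captured output] A minimal outline of the per-method group reset that TestRSGroupsBase replays between tests, mirroring the MoveTables / MoveServers / RemoveRSGroup / AddRSGroup entries above. The helper name resetGroups and the masterAddress parameter are illustrative assumptions; moving the active master's address into the "master" group is expected to fail with the ConstraintException seen above ("Server ... is either offline or it does not exist") and is only logged, matching the WARN "Got this on setup, FYI" entries.

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class GroupResetSketch {
  private static final Logger LOG = LoggerFactory.getLogger(GroupResetSketch.class);

  static void resetGroups(RSGroupAdminClient rsGroupAdmin, Address masterAddress) throws Exception {
    // Return any stray tables/servers to the default group; an empty set is a
    // no-op server-side ("moveTables() passed an empty set. Ignoring.").
    rsGroupAdmin.moveTables(Collections.emptySet(), "default");
    rsGroupAdmin.moveServers(Collections.emptySet(), "default");

    // Recreate the dedicated "master" group, as in the RemoveRSGroup/AddRSGroup
    // entries above.
    rsGroupAdmin.removeRSGroup("master");
    rsGroupAdmin.addRSGroup("master");

    // Pinning the master's own address into that group is rejected because the
    // master is not a live region server; the test only logs the failure.
    try {
      rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (ConstraintException e) {
      LOG.warn("Got this on setup, FYI", e);
    }
  }
}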
2023-07-23 21:10:50,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:50,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:50,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:50,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:50,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:50,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:50,548 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:50,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:50,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:50,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:50,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:50,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:50,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 858 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147850558, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:50,559 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:50,561 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:50,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,562 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:50,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:50,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:50,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,569 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:50,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:50,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 21:10:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to default 2023-07-23 21:10:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:50,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:50,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:50,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,603 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:50,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:50,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:50,610 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:50,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 123 2023-07-23 21:10:50,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-23 21:10:50,613 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:50,614 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:50,615 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:50,616 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:50,619 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:50,625 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:50,625 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:50,625 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:50,625 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:50,625 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:50,627 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d empty. 2023-07-23 21:10:50,627 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d empty. 2023-07-23 21:10:50,628 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e empty. 2023-07-23 21:10:50,628 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:50,628 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:50,628 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:50,629 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 empty. 2023-07-23 21:10:50,629 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a empty. 
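The AddRSGroup, MoveServers, ListRSGroupInfos and GetRSGroupInfo requests logged above are driven from the client side of the test. A minimal sketch (not the test's actual code) of calls that could produce those requests, assuming the branch-2.4 hbase-rsgroup client class RSGroupAdminClient; the group name, host names and ports are copied from the log, while the connection setup and class name are assumptions for illustration.

```java
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // AddRSGroup: create the target group seen in the log.
      String group = "Group_testDisabledTableMove_1294796966";
      rsGroupAdmin.addRSGroup(group);

      // MoveServers: move the two region servers named in the log
      // from 'default' into the new group (host/port values are illustrative).
      Set<Address> servers = new TreeSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35321));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34893));
      rsGroupAdmin.moveServers(servers, group);

      // GetRSGroupInfo: read the group back before creating the table.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      System.out.println(group + " now has servers: " + info.getServers());
    }
  }
}
```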
2023-07-23 21:10:50,630 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:50,630 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:50,630 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-23 21:10:50,698 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:50,699 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1e365aedfec254dcff2415d13b09656a, NAME => 'Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:50,700 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => c881ac1d093b1d4e9efe5816e45a015e, NAME => 'Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:50,701 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => c05119f53b7eadcbcfde57af3de1b53d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:50,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated 
Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 1e365aedfec254dcff2415d13b09656a, disabling compactions & flushes 2023-07-23 21:10:50,750 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. after waiting 0 ms 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:50,750 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:50,750 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 1e365aedfec254dcff2415d13b09656a: 2023-07-23 21:10:50,751 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => bb84a88066b2150fefdc95bd03b45932, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:50,752 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,752 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing c05119f53b7eadcbcfde57af3de1b53d, disabling compactions & flushes 2023-07-23 21:10:50,752 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:50,752 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 
2023-07-23 21:10:50,753 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. after waiting 0 ms 2023-07-23 21:10:50,753 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:50,753 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:50,753 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for c05119f53b7eadcbcfde57af3de1b53d: 2023-07-23 21:10:50,753 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => dd57c6bc102b255374365b4031d5554d, NAME => 'Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp 2023-07-23 21:10:50,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,756 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing c881ac1d093b1d4e9efe5816e45a015e, disabling compactions & flushes 2023-07-23 21:10:50,756 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:50,756 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:50,756 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. after waiting 0 ms 2023-07-23 21:10:50,756 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:50,756 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 
2023-07-23 21:10:50,756 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for c881ac1d093b1d4e9efe5816e45a015e: 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing bb84a88066b2150fefdc95bd03b45932, disabling compactions & flushes 2023-07-23 21:10:50,789 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. after waiting 0 ms 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:50,789 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:50,789 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for bb84a88066b2150fefdc95bd03b45932: 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing dd57c6bc102b255374365b4031d5554d, disabling compactions & flushes 2023-07-23 21:10:50,790 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. after waiting 0 ms 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 
2023-07-23 21:10:50,790 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:50,790 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for dd57c6bc102b255374365b4031d5554d: 2023-07-23 21:10:50,793 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:50,794 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650794"}]},"ts":"1690146650794"} 2023-07-23 21:10:50,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650794"}]},"ts":"1690146650794"} 2023-07-23 21:10:50,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650794"}]},"ts":"1690146650794"} 2023-07-23 21:10:50,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650794"}]},"ts":"1690146650794"} 2023-07-23 21:10:50,795 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146650794"}]},"ts":"1690146650794"} 2023-07-23 21:10:50,797 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
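A minimal client-side sketch of the createTable call that would drive a CreateTableProcedure like pid=123 above. The table name, the single column family 'f' (all attributes in the logged descriptor are HBase defaults), and the four split keys that yield the five regions added to hbase:meta are taken from the log; the connection setup and class name are assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName name = TableName.valueOf("Group_testDisabledTableMove");

      // Single column family 'f'; the VERSIONS/BLOOMFILTER/... values in the
      // logged descriptor are the defaults, so no explicit setters are needed.
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();

      // Four split keys give the five regions seen in the log; the two binary
      // keys are parsed from the same \xNN notation the log uses.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splits);
    }
  }
}
```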
2023-07-23 21:10:50,798 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:50,798 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146650798"}]},"ts":"1690146650798"} 2023-07-23 21:10:50,799 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-23 21:10:50,804 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:50,804 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:50,804 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:50,804 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:50,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, ASSIGN}, {pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, ASSIGN}, {pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, ASSIGN}, {pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, ASSIGN}, {pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, ASSIGN}] 2023-07-23 21:10:50,807 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, ASSIGN 2023-07-23 21:10:50,807 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, ASSIGN 2023-07-23 21:10:50,807 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, ASSIGN 2023-07-23 21:10:50,808 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, ASSIGN 2023-07-23 21:10:50,808 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=125, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:50,808 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, ASSIGN 2023-07-23 21:10:50,808 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:50,809 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37385,1690146629650; forceNewPlan=false, retain=false 2023-07-23 21:10:50,809 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:50,810 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=128, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46093,1690146629455; forceNewPlan=false, retain=false 2023-07-23 21:10:50,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-23 21:10:50,959 INFO [jenkins-hbase4:46113] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
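The repeated "Checking to see if procedure is done pid=123" records are the client polling the master for completion of the create-table procedure while the assignment subprocedures above run. A hedged sketch of the asynchronous form of that call, assuming the Admin.createTableAsync API and a desc/splits pair like the one in the previous sketch; the timeout value is an arbitrary choice for illustration.

```java
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CreateTableAsyncSketch {
  // desc and splits as in the previous sketch; admin is an open Admin handle.
  static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splits) throws Exception {
    // createTableAsync submits the CreateTableProcedure and returns a Future;
    // get() polls the master until the procedure is done, which is what the
    // repeated "Checking to see if procedure is done" lines reflect.
    Future<Void> pending = admin.createTableAsync(desc, splits);
    pending.get(5, TimeUnit.MINUTES);
  }
}
```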
2023-07-23 21:10:50,962 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=bb84a88066b2150fefdc95bd03b45932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:50,962 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c05119f53b7eadcbcfde57af3de1b53d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:50,962 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650962"}]},"ts":"1690146650962"} 2023-07-23 21:10:50,962 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=c881ac1d093b1d4e9efe5816e45a015e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:50,962 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=1e365aedfec254dcff2415d13b09656a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:50,962 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650962"}]},"ts":"1690146650962"} 2023-07-23 21:10:50,963 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650962"}]},"ts":"1690146650962"} 2023-07-23 21:10:50,962 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=dd57c6bc102b255374365b4031d5554d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:50,962 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146650962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650962"}]},"ts":"1690146650962"} 2023-07-23 21:10:50,963 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146650962"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146650962"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146650962"}]},"ts":"1690146650962"} 2023-07-23 21:10:50,964 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=129, ppid=127, state=RUNNABLE; OpenRegionProcedure bb84a88066b2150fefdc95bd03b45932, 
server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:50,965 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=125, state=RUNNABLE; OpenRegionProcedure c881ac1d093b1d4e9efe5816e45a015e, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:50,966 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=124, state=RUNNABLE; OpenRegionProcedure 1e365aedfec254dcff2415d13b09656a, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:50,967 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=126, state=RUNNABLE; OpenRegionProcedure c05119f53b7eadcbcfde57af3de1b53d, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:50,969 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=128, state=RUNNABLE; OpenRegionProcedure dd57c6bc102b255374365b4031d5554d, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:51,120 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bb84a88066b2150fefdc95bd03b45932, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 21:10:51,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 
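With the OpenRegionProcedures above dispatched to the servers on ports 37385 and 46093, region placement can be inspected from the client. A small sketch, assuming a RegionLocator obtained from the same connection; it only prints each region's location and is not part of the test itself.

```java
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementSketch {
  // Prints each region's encoded name and hosting server, mirroring the
  // OpenRegionProcedure targets recorded in the log above.
  static void dumpLocations(Connection conn) throws Exception {
    TableName name = TableName.valueOf("Group_testDisabledTableMove");
    try (RegionLocator locator = conn.getRegionLocator(name)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```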
2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dd57c6bc102b255374365b4031d5554d, NAME => 'Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,123 INFO [StoreOpener-bb84a88066b2150fefdc95bd03b45932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,123 INFO [StoreOpener-dd57c6bc102b255374365b4031d5554d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,125 DEBUG [StoreOpener-dd57c6bc102b255374365b4031d5554d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/f 2023-07-23 21:10:51,125 DEBUG [StoreOpener-dd57c6bc102b255374365b4031d5554d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/f 2023-07-23 21:10:51,125 INFO [StoreOpener-dd57c6bc102b255374365b4031d5554d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dd57c6bc102b255374365b4031d5554d columnFamilyName f 2023-07-23 21:10:51,125 DEBUG [StoreOpener-bb84a88066b2150fefdc95bd03b45932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/f 2023-07-23 21:10:51,125 DEBUG [StoreOpener-bb84a88066b2150fefdc95bd03b45932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/f 2023-07-23 21:10:51,126 INFO [StoreOpener-dd57c6bc102b255374365b4031d5554d-1] regionserver.HStore(310): Store=dd57c6bc102b255374365b4031d5554d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:51,126 INFO [StoreOpener-bb84a88066b2150fefdc95bd03b45932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bb84a88066b2150fefdc95bd03b45932 columnFamilyName f 2023-07-23 21:10:51,127 INFO [StoreOpener-bb84a88066b2150fefdc95bd03b45932-1] regionserver.HStore(310): Store=bb84a88066b2150fefdc95bd03b45932/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:51,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,128 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,131 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:51,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bb84a88066b2150fefdc95bd03b45932; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11101822080, jitterRate=0.03393775224685669}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:51,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bb84a88066b2150fefdc95bd03b45932: 2023-07-23 21:10:51,136 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932., pid=129, masterSystemTime=1690146651116 2023-07-23 21:10:51,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,137 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,138 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 
2023-07-23 21:10:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c05119f53b7eadcbcfde57af3de1b53d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 21:10:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,139 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=bb84a88066b2150fefdc95bd03b45932, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:51,139 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651139"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146651139"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146651139"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146651139"}]},"ts":"1690146651139"} 2023-07-23 21:10:51,140 INFO [StoreOpener-c05119f53b7eadcbcfde57af3de1b53d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,142 DEBUG [StoreOpener-c05119f53b7eadcbcfde57af3de1b53d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/f 2023-07-23 21:10:51,142 DEBUG [StoreOpener-c05119f53b7eadcbcfde57af3de1b53d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/f 2023-07-23 21:10:51,142 INFO [StoreOpener-c05119f53b7eadcbcfde57af3de1b53d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c05119f53b7eadcbcfde57af3de1b53d columnFamilyName f 2023-07-23 21:10:51,143 INFO [StoreOpener-c05119f53b7eadcbcfde57af3de1b53d-1] regionserver.HStore(310): Store=c05119f53b7eadcbcfde57af3de1b53d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:51,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=129, resume processing ppid=127 2023-07-23 21:10:51,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, ppid=127, state=SUCCESS; OpenRegionProcedure bb84a88066b2150fefdc95bd03b45932, server=jenkins-hbase4.apache.org,37385,1690146629650 in 177 msec 2023-07-23 21:10:51,145 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, ASSIGN in 339 msec 2023-07-23 21:10:51,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:51,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:51,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dd57c6bc102b255374365b4031d5554d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9834273440, jitterRate=-0.08411191403865814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:51,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c05119f53b7eadcbcfde57af3de1b53d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10280132000, jitterRate=-0.04258809983730316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:51,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dd57c6bc102b255374365b4031d5554d: 2023-07-23 21:10:51,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for c05119f53b7eadcbcfde57af3de1b53d: 2023-07-23 21:10:51,157 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d., pid=133, masterSystemTime=1690146651117 2023-07-23 21:10:51,157 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d., pid=132, masterSystemTime=1690146651116 2023-07-23 21:10:51,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:51,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:51,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:51,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c881ac1d093b1d4e9efe5816e45a015e, NAME => 'Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 21:10:51,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,160 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=128 updating hbase:meta row=dd57c6bc102b255374365b4031d5554d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 
2023-07-23 21:10:51,160 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651160"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146651160"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146651160"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146651160"}]},"ts":"1690146651160"} 2023-07-23 21:10:51,161 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:51,161 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=c05119f53b7eadcbcfde57af3de1b53d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:51,161 INFO [StoreOpener-c881ac1d093b1d4e9efe5816e45a015e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,161 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651161"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146651161"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146651161"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146651161"}]},"ts":"1690146651161"} 2023-07-23 21:10:51,163 DEBUG [StoreOpener-c881ac1d093b1d4e9efe5816e45a015e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/f 2023-07-23 21:10:51,163 DEBUG [StoreOpener-c881ac1d093b1d4e9efe5816e45a015e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/f 2023-07-23 21:10:51,164 INFO [StoreOpener-c881ac1d093b1d4e9efe5816e45a015e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c881ac1d093b1d4e9efe5816e45a015e columnFamilyName f 2023-07-23 21:10:51,165 INFO [StoreOpener-c881ac1d093b1d4e9efe5816e45a015e-1] regionserver.HStore(310): Store=c881ac1d093b1d4e9efe5816e45a015e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
2023-07-23 21:10:51,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=128 2023-07-23 21:10:51,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=128, state=SUCCESS; OpenRegionProcedure dd57c6bc102b255374365b4031d5554d, server=jenkins-hbase4.apache.org,46093,1690146629455 in 194 msec 2023-07-23 21:10:51,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=126 2023-07-23 21:10:51,167 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=126, state=SUCCESS; OpenRegionProcedure c05119f53b7eadcbcfde57af3de1b53d, server=jenkins-hbase4.apache.org,37385,1690146629650 in 197 msec 2023-07-23 21:10:51,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,168 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, ASSIGN in 361 msec 2023-07-23 21:10:51,168 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, ASSIGN in 361 msec 2023-07-23 21:10:51,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:51,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c881ac1d093b1d4e9efe5816e45a015e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9425001600, jitterRate=-0.12222832441329956}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:51,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c881ac1d093b1d4e9efe5816e45a015e: 2023-07-23 21:10:51,174 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e., pid=130, masterSystemTime=1690146651117 2023-07-23 21:10:51,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 
2023-07-23 21:10:51,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:51,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1e365aedfec254dcff2415d13b09656a, NAME => 'Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 21:10:51,176 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=125 updating hbase:meta row=c881ac1d093b1d4e9efe5816e45a015e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,176 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651175"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146651175"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146651175"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146651175"}]},"ts":"1690146651175"} 2023-07-23 21:10:51,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:51,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,177 INFO [StoreOpener-1e365aedfec254dcff2415d13b09656a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,179 DEBUG [StoreOpener-1e365aedfec254dcff2415d13b09656a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/f 2023-07-23 21:10:51,179 DEBUG [StoreOpener-1e365aedfec254dcff2415d13b09656a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/f 2023-07-23 21:10:51,179 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=125 2023-07-23 21:10:51,179 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=125, state=SUCCESS; 
OpenRegionProcedure c881ac1d093b1d4e9efe5816e45a015e, server=jenkins-hbase4.apache.org,46093,1690146629455 in 212 msec 2023-07-23 21:10:51,180 INFO [StoreOpener-1e365aedfec254dcff2415d13b09656a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1e365aedfec254dcff2415d13b09656a columnFamilyName f 2023-07-23 21:10:51,180 INFO [StoreOpener-1e365aedfec254dcff2415d13b09656a-1] regionserver.HStore(310): Store=1e365aedfec254dcff2415d13b09656a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:51,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, ASSIGN in 374 msec 2023-07-23 21:10:51,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:51,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1e365aedfec254dcff2415d13b09656a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9611902720, jitterRate=-0.10482180118560791}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:51,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1e365aedfec254dcff2415d13b09656a: 2023-07-23 21:10:51,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a., pid=131, masterSystemTime=1690146651117 2023-07-23 21:10:51,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,189 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=1e365aedfec254dcff2415d13b09656a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,189 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651189"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146651189"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146651189"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146651189"}]},"ts":"1690146651189"} 2023-07-23 21:10:51,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=124 2023-07-23 21:10:51,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=124, state=SUCCESS; OpenRegionProcedure 1e365aedfec254dcff2415d13b09656a, server=jenkins-hbase4.apache.org,46093,1690146629455 in 224 msec 2023-07-23 21:10:51,193 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-23 21:10:51,194 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, ASSIGN in 387 msec 2023-07-23 21:10:51,194 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:51,194 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146651194"}]},"ts":"1690146651194"} 2023-07-23 21:10:51,195 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-23 21:10:51,197 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:51,199 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 590 msec 2023-07-23 21:10:51,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-23 21:10:51,220 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 123 completed 2023-07-23 21:10:51,221 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. 
Timeout = 60000ms 2023-07-23 21:10:51,221 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,224 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-23 21:10:51,224 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,224 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-23 21:10:51,225 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-23 21:10:51,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:51,232 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-23 21:10:51,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-23 21:10:51,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=134, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-23 21:10:51,235 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146651235"}]},"ts":"1690146651235"} 2023-07-23 21:10:51,237 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-23 21:10:51,238 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-23 21:10:51,239 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, UNASSIGN}, {pid=136, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, UNASSIGN}, {pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, UNASSIGN}, {pid=138, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, UNASSIGN}, {pid=139, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, UNASSIGN}] 2023-07-23 21:10:51,240 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, UNASSIGN 2023-07-23 21:10:51,241 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, UNASSIGN 2023-07-23 21:10:51,241 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, UNASSIGN 2023-07-23 21:10:51,241 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, UNASSIGN 2023-07-23 21:10:51,241 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=134, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, UNASSIGN 2023-07-23 21:10:51,241 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=c881ac1d093b1d4e9efe5816e45a015e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,241 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1e365aedfec254dcff2415d13b09656a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,241 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651241"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651241"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651241"}]},"ts":"1690146651241"} 2023-07-23 21:10:51,242 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651241"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651241"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651241"}]},"ts":"1690146651241"} 2023-07-23 21:10:51,242 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=bb84a88066b2150fefdc95bd03b45932, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:51,242 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=c05119f53b7eadcbcfde57af3de1b53d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:51,242 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651242"}]},"ts":"1690146651242"} 2023-07-23 21:10:51,242 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651242"}]},"ts":"1690146651242"} 2023-07-23 21:10:51,242 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=dd57c6bc102b255374365b4031d5554d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:51,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146651242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146651242"}]},"ts":"1690146651242"} 2023-07-23 21:10:51,244 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=136, state=RUNNABLE; CloseRegionProcedure c881ac1d093b1d4e9efe5816e45a015e, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:51,244 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=135, state=RUNNABLE; CloseRegionProcedure 1e365aedfec254dcff2415d13b09656a, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:51,245 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=138, state=RUNNABLE; CloseRegionProcedure bb84a88066b2150fefdc95bd03b45932, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:51,246 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=137, state=RUNNABLE; CloseRegionProcedure c05119f53b7eadcbcfde57af3de1b53d, server=jenkins-hbase4.apache.org,37385,1690146629650}] 2023-07-23 21:10:51,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; CloseRegionProcedure dd57c6bc102b255374365b4031d5554d, server=jenkins-hbase4.apache.org,46093,1690146629455}] 2023-07-23 21:10:51,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-23 21:10:51,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bb84a88066b2150fefdc95bd03b45932, disabling compactions & flushes 2023-07-23 21:10:51,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. after waiting 0 ms 2023-07-23 21:10:51,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1e365aedfec254dcff2415d13b09656a, disabling compactions & flushes 2023-07-23 21:10:51,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. after waiting 0 ms 2023-07-23 21:10:51,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 2023-07-23 21:10:51,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:51,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932. 2023-07-23 21:10:51,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bb84a88066b2150fefdc95bd03b45932: 2023-07-23 21:10:51,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:51,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a. 
2023-07-23 21:10:51,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1e365aedfec254dcff2415d13b09656a: 2023-07-23 21:10:51,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c05119f53b7eadcbcfde57af3de1b53d, disabling compactions & flushes 2023-07-23 21:10:51,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:51,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:51,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. after waiting 0 ms 2023-07-23 21:10:51,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:51,411 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=bb84a88066b2150fefdc95bd03b45932, regionState=CLOSED 2023-07-23 21:10:51,411 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651411"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651411"}]},"ts":"1690146651411"} 2023-07-23 21:10:51,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c881ac1d093b1d4e9efe5816e45a015e, disabling compactions & flushes 2023-07-23 21:10:51,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:51,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 2023-07-23 21:10:51,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. after waiting 0 ms 2023-07-23 21:10:51,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 
2023-07-23 21:10:51,414 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1e365aedfec254dcff2415d13b09656a, regionState=CLOSED 2023-07-23 21:10:51,414 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651414"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651414"}]},"ts":"1690146651414"} 2023-07-23 21:10:51,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=138 2023-07-23 21:10:51,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=138, state=SUCCESS; CloseRegionProcedure bb84a88066b2150fefdc95bd03b45932, server=jenkins-hbase4.apache.org,37385,1690146629650 in 172 msec 2023-07-23 21:10:51,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:51,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:51,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d. 2023-07-23 21:10:51,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c05119f53b7eadcbcfde57af3de1b53d: 2023-07-23 21:10:51,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e. 
2023-07-23 21:10:51,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c881ac1d093b1d4e9efe5816e45a015e: 2023-07-23 21:10:51,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=135 2023-07-23 21:10:51,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=135, state=SUCCESS; CloseRegionProcedure 1e365aedfec254dcff2415d13b09656a, server=jenkins-hbase4.apache.org,46093,1690146629455 in 174 msec 2023-07-23 21:10:51,423 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb84a88066b2150fefdc95bd03b45932, UNASSIGN in 181 msec 2023-07-23 21:10:51,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dd57c6bc102b255374365b4031d5554d, disabling compactions & flushes 2023-07-23 21:10:51,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:51,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 2023-07-23 21:10:51,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. after waiting 0 ms 2023-07-23 21:10:51,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 
2023-07-23 21:10:51,426 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1e365aedfec254dcff2415d13b09656a, UNASSIGN in 184 msec 2023-07-23 21:10:51,426 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=c881ac1d093b1d4e9efe5816e45a015e, regionState=CLOSED 2023-07-23 21:10:51,426 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651426"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651426"}]},"ts":"1690146651426"} 2023-07-23 21:10:51,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,431 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=c05119f53b7eadcbcfde57af3de1b53d, regionState=CLOSED 2023-07-23 21:10:51,432 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690146651431"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651431"}]},"ts":"1690146651431"} 2023-07-23 21:10:51,435 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=136 2023-07-23 21:10:51,435 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=136, state=SUCCESS; CloseRegionProcedure c881ac1d093b1d4e9efe5816e45a015e, server=jenkins-hbase4.apache.org,46093,1690146629455 in 188 msec 2023-07-23 21:10:51,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=137 2023-07-23 21:10:51,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=137, state=SUCCESS; CloseRegionProcedure c05119f53b7eadcbcfde57af3de1b53d, server=jenkins-hbase4.apache.org,37385,1690146629650 in 188 msec 2023-07-23 21:10:51,438 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c881ac1d093b1d4e9efe5816e45a015e, UNASSIGN in 196 msec 2023-07-23 21:10:51,439 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c05119f53b7eadcbcfde57af3de1b53d, UNASSIGN in 198 msec 2023-07-23 21:10:51,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:51,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d. 
2023-07-23 21:10:51,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dd57c6bc102b255374365b4031d5554d: 2023-07-23 21:10:51,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,442 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=dd57c6bc102b255374365b4031d5554d, regionState=CLOSED 2023-07-23 21:10:51,442 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690146651442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146651442"}]},"ts":"1690146651442"} 2023-07-23 21:10:51,446 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-23 21:10:51,446 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; CloseRegionProcedure dd57c6bc102b255374365b4031d5554d, server=jenkins-hbase4.apache.org,46093,1690146629455 in 197 msec 2023-07-23 21:10:51,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-23 21:10:51,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=dd57c6bc102b255374365b4031d5554d, UNASSIGN in 207 msec 2023-07-23 21:10:51,449 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146651449"}]},"ts":"1690146651449"} 2023-07-23 21:10:51,450 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-23 21:10:51,452 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-23 21:10:51,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 221 msec 2023-07-23 21:10:51,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=134 2023-07-23 21:10:51,538 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 134 completed 2023-07-23 21:10:51,538 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,543 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-23 21:10:51,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1294796966, current retry=0 2023-07-23 21:10:51,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1294796966. 2023-07-23 21:10:51,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-23 21:10:51,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:10:51,550 INFO [Listener at localhost/39787] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-23 21:10:51,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-23 21:10:51,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.&lt;init&gt;(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:51,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 918 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:56014 deadline: 1690146711551, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-23 21:10:51,552 DEBUG [Listener at localhost/39787] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-23 21:10:51,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-23 21:10:51,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,556 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=146, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1294796966' 2023-07-23 21:10:51,556 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=146, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,564 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,564 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,564 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,564 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,564 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,566 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/recovered.edits] 2023-07-23 21:10:51,567 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/recovered.edits] 2023-07-23 21:10:51,567 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/recovered.edits] 2023-07-23 21:10:51,567 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/recovered.edits] 2023-07-23 21:10:51,567 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/f, FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/recovered.edits] 2023-07-23 21:10:51,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-23 21:10:51,577 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d/recovered.edits/4.seqid 2023-07-23 21:10:51,579 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/recovered.edits/4.seqid to 
hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e/recovered.edits/4.seqid 2023-07-23 21:10:51,580 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c05119f53b7eadcbcfde57af3de1b53d 2023-07-23 21:10:51,580 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/c881ac1d093b1d4e9efe5816e45a015e 2023-07-23 21:10:51,580 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a/recovered.edits/4.seqid 2023-07-23 21:10:51,581 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d/recovered.edits/4.seqid 2023-07-23 21:10:51,582 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/recovered.edits/4.seqid to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/archive/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932/recovered.edits/4.seqid 2023-07-23 21:10:51,582 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/1e365aedfec254dcff2415d13b09656a 2023-07-23 21:10:51,582 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/dd57c6bc102b255374365b4031d5554d 2023-07-23 21:10:51,583 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/.tmp/data/default/Group_testDisabledTableMove/bb84a88066b2150fefdc95bd03b45932 2023-07-23 21:10:51,583 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-23 21:10:51,586 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=146, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,589 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-23 21:10:51,596 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-23 21:10:51,599 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=146, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,599 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-23 21:10:51,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651599"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651599"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,599 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651599"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,600 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651599"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,600 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146651599"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,602 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 21:10:51,602 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1e365aedfec254dcff2415d13b09656a, NAME => 'Group_testDisabledTableMove,,1690146650607.1e365aedfec254dcff2415d13b09656a.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => c881ac1d093b1d4e9efe5816e45a015e, NAME => 'Group_testDisabledTableMove,aaaaa,1690146650607.c881ac1d093b1d4e9efe5816e45a015e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => c05119f53b7eadcbcfde57af3de1b53d, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690146650607.c05119f53b7eadcbcfde57af3de1b53d.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => bb84a88066b2150fefdc95bd03b45932, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690146650607.bb84a88066b2150fefdc95bd03b45932.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => dd57c6bc102b255374365b4031d5554d, NAME => 'Group_testDisabledTableMove,zzzzz,1690146650607.dd57c6bc102b255374365b4031d5554d.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 21:10:51,603 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-23 21:10:51,603 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146651603"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:51,605 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-23 21:10:51,607 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=146, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 21:10:51,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 54 msec 2023-07-23 21:10:51,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-23 21:10:51,669 INFO [Listener at localhost/39787] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-23 21:10:51,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:51,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:51,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:34893] to rsgroup default 2023-07-23 21:10:51,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:10:51,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1294796966, current retry=0 2023-07-23 21:10:51,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34893,1690146629259, jenkins-hbase4.apache.org,35321,1690146633061] are moved back to Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1294796966 => default 2023-07-23 21:10:51,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1294796966 2023-07-23 21:10:51,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:10:51,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:51,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:51,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:10:51,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:51,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:51,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:51,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:51,715 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:51,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:51,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:51,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:51,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:51,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:51,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 952 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147851724, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:51,725 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:10:51,726 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,727 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:51,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:51,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,745 INFO [Listener at localhost/39787] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=502 (was 500) Potentially hanging thread: hconnection-0x724df952-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a5e2fc3-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_756688116_17 at /127.0.0.1:51480 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-951838729_17 at /127.0.0.1:49434 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=766 (was 740) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 479), ProcessCount=173 (was 173), AvailableMemoryMB=8083 (was 8118) 2023-07-23 21:10:51,746 WARN [Listener at localhost/39787] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 21:10:51,761 INFO [Listener at localhost/39787] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=502, OpenFileDescriptor=766, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=173, AvailableMemoryMB=8082 2023-07-23 21:10:51,761 WARN [Listener at localhost/39787] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 21:10:51,761 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-23 21:10:51,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:10:51,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:10:51,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:10:51,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:10:51,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:10:51,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:10:51,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:51,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:10:51,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:10:51,775 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:10:51,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:10:51,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 
21:10:51,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:10:51,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:10:51,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:10:51,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46113] to rsgroup master 2023-07-23 21:10:51,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:10:51,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] ipc.CallRunner(144): callId: 980 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:56014 deadline: 1690147851788, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 2023-07-23 21:10:51,789 WARN [Listener at localhost/39787] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46113 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:10:51,791 INFO [Listener at localhost/39787] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:51,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:51,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:51,792 INFO [Listener at localhost/39787] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34893, jenkins-hbase4.apache.org:35321, jenkins-hbase4.apache.org:37385, jenkins-hbase4.apache.org:46093], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:10:51,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:10:51,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46113] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:10:51,793 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 21:10:51,793 INFO [Listener at localhost/39787] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:10:51,793 DEBUG [Listener at localhost/39787] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x518a774a to 127.0.0.1:59206 2023-07-23 21:10:51,793 DEBUG [Listener at localhost/39787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:51,794 DEBUG [Listener at localhost/39787] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 21:10:51,794 DEBUG [Listener at localhost/39787] util.JVMClusterUtil(257): Found active master hash=2100289760, stopped=false 2023-07-23 21:10:51,794 DEBUG [Listener at localhost/39787] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:10:51,795 DEBUG [Listener at localhost/39787] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:10:51,795 INFO [Listener at localhost/39787] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:51,797 INFO [Listener at localhost/39787] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:51,797 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:51,798 DEBUG [Listener at localhost/39787] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x41aea838 to 127.0.0.1:59206 2023-07-23 21:10:51,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:51,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:51,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:51,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:51,798 DEBUG [Listener at localhost/39787] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:51,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:51,798 INFO [Listener at localhost/39787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34893,1690146629259' ***** 2023-07-23 21:10:51,798 INFO [Listener at localhost/39787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46093,1690146629455' ***** 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:51,799 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37385,1690146629650' ***** 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:51,799 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35321,1690146633061' ***** 2023-07-23 21:10:51,799 INFO 
[RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:51,799 INFO [Listener at localhost/39787] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:51,801 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:51,815 INFO [RS:2;jenkins-hbase4:37385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2f709731{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:51,815 INFO [RS:0;jenkins-hbase4:34893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6770849c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:51,815 INFO [RS:1;jenkins-hbase4:46093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@46ffcd75{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:51,815 INFO [RS:3;jenkins-hbase4:35321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4f3a0b5c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:51,819 INFO [RS:2;jenkins-hbase4:37385] server.AbstractConnector(383): Stopped ServerConnector@206a46fd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:51,819 INFO [RS:1;jenkins-hbase4:46093] server.AbstractConnector(383): Stopped ServerConnector@7f6e0343{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:51,819 INFO [RS:2;jenkins-hbase4:37385] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:51,819 INFO [RS:3;jenkins-hbase4:35321] server.AbstractConnector(383): Stopped ServerConnector@3feecd6d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:51,819 INFO [RS:0;jenkins-hbase4:34893] server.AbstractConnector(383): Stopped ServerConnector@24b5075d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:51,820 INFO [RS:2;jenkins-hbase4:37385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5109bb49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:51,820 INFO [RS:3;jenkins-hbase4:35321] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:51,820 INFO [RS:1;jenkins-hbase4:46093] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:51,821 INFO [RS:2;jenkins-hbase4:37385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67c48ba9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:51,821 INFO [RS:0;jenkins-hbase4:34893] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:51,823 INFO [RS:1;jenkins-hbase4:46093] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2c8a8cb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:51,822 INFO [RS:3;jenkins-hbase4:35321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c431029{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:51,824 INFO [RS:0;jenkins-hbase4:34893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@282c1c14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:51,824 INFO [RS:1;jenkins-hbase4:46093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@539fa719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:51,825 INFO [RS:3;jenkins-hbase4:35321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@533f7cd2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:51,825 INFO [RS:0;jenkins-hbase4:34893] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10630bfe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:52,008 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,008 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:52,008 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,008 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:52,008 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,008 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,010 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:52,010 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:52,015 INFO [RS:2;jenkins-hbase4:37385] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:52,016 INFO [RS:2;jenkins-hbase4:37385] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:52,017 INFO [RS:2;jenkins-hbase4:37385] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:52,017 INFO [RS:3;jenkins-hbase4:35321] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:52,017 INFO [RS:3;jenkins-hbase4:35321] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-23 21:10:52,017 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(3305): Received CLOSE for f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:52,017 INFO [RS:3;jenkins-hbase4:35321] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:52,017 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:52,018 INFO [RS:0;jenkins-hbase4:34893] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:52,018 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:52,018 INFO [RS:0;jenkins-hbase4:34893] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:52,019 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f333c28154de4e8e257c6e5c2c5e0d35, disabling compactions & flushes 2023-07-23 21:10:52,019 DEBUG [RS:3;jenkins-hbase4:35321] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3850d5ef to 127.0.0.1:59206 2023-07-23 21:10:52,019 INFO [RS:0;jenkins-hbase4:34893] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:52,019 INFO [RS:1;jenkins-hbase4:46093] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:52,019 INFO [RS:1;jenkins-hbase4:46093] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:52,019 INFO [RS:1;jenkins-hbase4:46093] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:52,019 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:52,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:52,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. after waiting 0 ms 2023-07-23 21:10:52,020 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 
2023-07-23 21:10:52,019 DEBUG [RS:2;jenkins-hbase4:37385] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62bfe45b to 127.0.0.1:59206 2023-07-23 21:10:52,020 DEBUG [RS:2;jenkins-hbase4:37385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,020 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:10:52,020 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1478): Online Regions={f333c28154de4e8e257c6e5c2c5e0d35=testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35.} 2023-07-23 21:10:52,021 DEBUG [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1504): Waiting on f333c28154de4e8e257c6e5c2c5e0d35 2023-07-23 21:10:52,020 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(3305): Received CLOSE for 4b41452589f00aa733370524c572da9b 2023-07-23 21:10:52,021 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(3305): Received CLOSE for 044211867ef276b1af97934dff65ac35 2023-07-23 21:10:52,021 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(3305): Received CLOSE for f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:52,019 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:52,019 DEBUG [RS:3;jenkins-hbase4:35321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,022 DEBUG [RS:0;jenkins-hbase4:34893] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x660e7f25 to 127.0.0.1:59206 2023-07-23 21:10:52,022 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:52,022 DEBUG [RS:1;jenkins-hbase4:46093] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a0f2748 to 127.0.0.1:59206 2023-07-23 21:10:52,022 DEBUG [RS:1;jenkins-hbase4:46093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,022 INFO [RS:1;jenkins-hbase4:46093] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:52,022 INFO [RS:1;jenkins-hbase4:46093] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:52,022 INFO [RS:1;jenkins-hbase4:46093] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:52,022 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:10:52,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b41452589f00aa733370524c572da9b, disabling compactions & flushes 2023-07-23 21:10:52,022 DEBUG [RS:0;jenkins-hbase4:34893] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,023 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34893,1690146629259; all regions closed. 2023-07-23 21:10:52,022 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35321,1690146633061; all regions closed. 2023-07-23 21:10:52,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:52,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
2023-07-23 21:10:52,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. after waiting 0 ms 2023-07-23 21:10:52,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:52,030 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-23 21:10:52,030 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 4b41452589f00aa733370524c572da9b=unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b., 044211867ef276b1af97934dff65ac35=hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35., f2fe29390f399eae0a4221056d0e01bd=hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd.} 2023-07-23 21:10:52,031 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1504): Waiting on 044211867ef276b1af97934dff65ac35, 1588230740, 4b41452589f00aa733370524c572da9b, f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:52,031 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:52,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:52,031 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:52,031 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:52,032 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:52,034 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.40 KB heapSize=120.38 KB 2023-07-23 21:10:52,051 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/testRename/f333c28154de4e8e257c6e5c2c5e0d35/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 21:10:52,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:52,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f333c28154de4e8e257c6e5c2c5e0d35: 2023-07-23 21:10:52,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690146644935.f333c28154de4e8e257c6e5c2c5e0d35. 2023-07-23 21:10:52,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/default/unmovedTable/4b41452589f00aa733370524c572da9b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 21:10:52,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 
2023-07-23 21:10:52,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b41452589f00aa733370524c572da9b: 2023-07-23 21:10:52,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690146646595.4b41452589f00aa733370524c572da9b. 2023-07-23 21:10:52,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 044211867ef276b1af97934dff65ac35, disabling compactions & flushes 2023-07-23 21:10:52,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:52,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:52,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. after waiting 0 ms 2023-07-23 21:10:52,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:52,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 044211867ef276b1af97934dff65ac35 1/1 column families, dataSize=28.43 KB heapSize=46.69 KB 2023-07-23 21:10:52,079 DEBUG [RS:0;jenkins-hbase4:34893] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs 2023-07-23 21:10:52,079 INFO [RS:0;jenkins-hbase4:34893] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34893%2C1690146629259:(num 1690146631576) 2023-07-23 21:10:52,079 DEBUG [RS:0;jenkins-hbase4:34893] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,080 INFO [RS:0;jenkins-hbase4:34893] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,080 DEBUG [RS:3;jenkins-hbase4:35321] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs 2023-07-23 21:10:52,080 INFO [RS:3;jenkins-hbase4:35321] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35321%2C1690146633061:(num 1690146633470) 2023-07-23 21:10:52,080 DEBUG [RS:3;jenkins-hbase4:35321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,080 INFO [RS:3;jenkins-hbase4:35321] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,080 INFO [RS:0;jenkins-hbase4:34893] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:52,081 INFO [RS:0;jenkins-hbase4:34893] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:52,081 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:52,081 INFO [RS:0;jenkins-hbase4:34893] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-23 21:10:52,081 INFO [RS:3;jenkins-hbase4:35321] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:52,081 INFO [RS:0;jenkins-hbase4:34893] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:52,082 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:52,082 INFO [RS:3;jenkins-hbase4:35321] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:52,082 INFO [RS:3;jenkins-hbase4:35321] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:52,083 INFO [RS:0;jenkins-hbase4:34893] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34893 2023-07-23 21:10:52,083 INFO [RS:3;jenkins-hbase4:35321] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:52,090 INFO [RS:3;jenkins-hbase4:35321] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35321 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,092 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:52,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:52,092 DEBUG [Listener 
at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35321,1690146633061 2023-07-23 21:10:52,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:52,093 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34893,1690146629259 2023-07-23 21:10:52,093 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34893,1690146629259] 2023-07-23 21:10:52,094 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34893,1690146629259; numProcessing=1 2023-07-23 21:10:52,103 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34893,1690146629259 already deleted, retry=false 2023-07-23 21:10:52,103 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34893,1690146629259 expired; onlineServers=3 2023-07-23 21:10:52,103 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35321,1690146633061] 2023-07-23 21:10:52,103 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35321,1690146633061; numProcessing=2 2023-07-23 21:10:52,155 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.43 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/.tmp/m/077f2b598b25433499f523011c150dcc 2023-07-23 21:10:52,159 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.42 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/info/684e6fa1ded544c08568435c7c180347 2023-07-23 21:10:52,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 077f2b598b25433499f523011c150dcc 2023-07-23 21:10:52,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
684e6fa1ded544c08568435c7c180347 2023-07-23 21:10:52,195 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,195 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:34893-0x10194055df50001, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,195 INFO [RS:0;jenkins-hbase4:34893] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34893,1690146629259; zookeeper connection closed. 2023-07-23 21:10:52,196 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15cd549b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15cd549b 2023-07-23 21:10:52,197 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35321,1690146633061 already deleted, retry=false 2023-07-23 21:10:52,197 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35321,1690146633061 expired; onlineServers=2 2023-07-23 21:10:52,208 INFO [RS:3;jenkins-hbase4:35321] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35321,1690146633061; zookeeper connection closed. 2023-07-23 21:10:52,209 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,209 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:35321-0x10194055df5000b, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,209 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d9bb90b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d9bb90b 2023-07-23 21:10:52,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/.tmp/m/077f2b598b25433499f523011c150dcc as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/m/077f2b598b25433499f523011c150dcc 2023-07-23 21:10:52,221 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37385,1690146629650; all regions closed. 
2023-07-23 21:10:52,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 077f2b598b25433499f523011c150dcc 2023-07-23 21:10:52,222 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/m/077f2b598b25433499f523011c150dcc, entries=28, sequenceid=95, filesize=6.1 K 2023-07-23 21:10:52,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.43 KB/29116, heapSize ~46.67 KB/47792, currentSize=0 B/0 for 044211867ef276b1af97934dff65ac35 in 156ms, sequenceid=95, compaction requested=false 2023-07-23 21:10:52,231 DEBUG [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1504): Waiting on 044211867ef276b1af97934dff65ac35, 1588230740, f2fe29390f399eae0a4221056d0e01bd 2023-07-23 21:10:52,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/rsgroup/044211867ef276b1af97934dff65ac35/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-23 21:10:52,254 DEBUG [RS:2;jenkins-hbase4:37385] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs 2023-07-23 21:10:52,254 INFO [RS:2;jenkins-hbase4:37385] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37385%2C1690146629650:(num 1690146631576) 2023-07-23 21:10:52,254 DEBUG [RS:2;jenkins-hbase4:37385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,254 INFO [RS:2;jenkins-hbase4:37385] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:52,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:52,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 044211867ef276b1af97934dff65ac35: 2023-07-23 21:10:52,259 INFO [RS:2;jenkins-hbase4:37385] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:52,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146632299.044211867ef276b1af97934dff65ac35. 2023-07-23 21:10:52,259 INFO [RS:2;jenkins-hbase4:37385] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:52,259 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:52,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f2fe29390f399eae0a4221056d0e01bd, disabling compactions & flushes 2023-07-23 21:10:52,259 INFO [RS:2;jenkins-hbase4:37385] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-23 21:10:52,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:52,259 INFO [RS:2;jenkins-hbase4:37385] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:52,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:52,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. after waiting 0 ms 2023-07-23 21:10:52,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:52,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f2fe29390f399eae0a4221056d0e01bd 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-23 21:10:52,260 INFO [RS:2;jenkins-hbase4:37385] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37385 2023-07-23 21:10:52,262 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,262 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:52,262 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37385,1690146629650 2023-07-23 21:10:52,263 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37385,1690146629650] 2023-07-23 21:10:52,263 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37385,1690146629650; numProcessing=3 2023-07-23 21:10:52,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/rep_barrier/b12172240bc2474dbef4f64c5fe20775 2023-07-23 21:10:52,267 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37385,1690146629650 already deleted, retry=false 2023-07-23 21:10:52,267 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37385,1690146629650 expired; onlineServers=1 2023-07-23 21:10:52,272 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b12172240bc2474dbef4f64c5fe20775 2023-07-23 21:10:52,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B 
at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/.tmp/info/5ff9e301645d4b6c884638bbcda93527 2023-07-23 21:10:52,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/.tmp/info/5ff9e301645d4b6c884638bbcda93527 as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/info/5ff9e301645d4b6c884638bbcda93527 2023-07-23 21:10:52,286 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.99 KB at sequenceid=196 (bloomFilter=false), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/table/7e48d27644e74c53a2c947b7f45cbce7 2023-07-23 21:10:52,290 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/info/5ff9e301645d4b6c884638bbcda93527, entries=2, sequenceid=6, filesize=4.8 K 2023-07-23 21:10:52,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for f2fe29390f399eae0a4221056d0e01bd in 31ms, sequenceid=6, compaction requested=false 2023-07-23 21:10:52,295 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e48d27644e74c53a2c947b7f45cbce7 2023-07-23 21:10:52,296 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/info/684e6fa1ded544c08568435c7c180347 as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/info/684e6fa1ded544c08568435c7c180347 2023-07-23 21:10:52,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/namespace/f2fe29390f399eae0a4221056d0e01bd/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-23 21:10:52,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 2023-07-23 21:10:52,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f2fe29390f399eae0a4221056d0e01bd: 2023-07-23 21:10:52,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146632065.f2fe29390f399eae0a4221056d0e01bd. 
2023-07-23 21:10:52,303 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 684e6fa1ded544c08568435c7c180347 2023-07-23 21:10:52,303 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/info/684e6fa1ded544c08568435c7c180347, entries=92, sequenceid=196, filesize=15.3 K 2023-07-23 21:10:52,305 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/rep_barrier/b12172240bc2474dbef4f64c5fe20775 as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/rep_barrier/b12172240bc2474dbef4f64c5fe20775 2023-07-23 21:10:52,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b12172240bc2474dbef4f64c5fe20775 2023-07-23 21:10:52,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/rep_barrier/b12172240bc2474dbef4f64c5fe20775, entries=18, sequenceid=196, filesize=6.9 K 2023-07-23 21:10:52,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/.tmp/table/7e48d27644e74c53a2c947b7f45cbce7 as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/table/7e48d27644e74c53a2c947b7f45cbce7 2023-07-23 21:10:52,318 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7e48d27644e74c53a2c947b7f45cbce7 2023-07-23 21:10:52,319 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/table/7e48d27644e74c53a2c947b7f45cbce7, entries=31, sequenceid=196, filesize=7.4 K 2023-07-23 21:10:52,319 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.40 KB/78237, heapSize ~120.34 KB/123224, currentSize=0 B/0 for 1588230740 in 285ms, sequenceid=196, compaction requested=false 2023-07-23 21:10:52,327 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/data/hbase/meta/1588230740/recovered.edits/199.seqid, newMaxSeqId=199, maxSeqId=1 2023-07-23 21:10:52,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:52,328 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:52,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:52,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed 
hbase:meta,,1.1588230740 2023-07-23 21:10:52,431 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46093,1690146629455; all regions closed. 2023-07-23 21:10:52,438 DEBUG [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs 2023-07-23 21:10:52,438 INFO [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46093%2C1690146629455.meta:.meta(num 1690146631790) 2023-07-23 21:10:52,445 DEBUG [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/oldWALs 2023-07-23 21:10:52,445 INFO [RS:1;jenkins-hbase4:46093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46093%2C1690146629455:(num 1690146631576) 2023-07-23 21:10:52,445 DEBUG [RS:1;jenkins-hbase4:46093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,445 INFO [RS:1;jenkins-hbase4:46093] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:52,445 INFO [RS:1;jenkins-hbase4:46093] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:52,446 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:52,446 INFO [RS:1;jenkins-hbase4:46093] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46093 2023-07-23 21:10:52,451 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:52,451 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46093,1690146629455 2023-07-23 21:10:52,453 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46093,1690146629455] 2023-07-23 21:10:52,453 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46093,1690146629455; numProcessing=4 2023-07-23 21:10:52,454 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46093,1690146629455 already deleted, retry=false 2023-07-23 21:10:52,454 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46093,1690146629455 expired; onlineServers=0 2023-07-23 21:10:52,454 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46113,1690146627323' ***** 2023-07-23 21:10:52,454 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:10:52,455 DEBUG [M:0;jenkins-hbase4:46113] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c12eadb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:52,455 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:52,457 INFO [M:0;jenkins-hbase4:46113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@55ffcf1a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:10:52,458 INFO [M:0;jenkins-hbase4:46113] server.AbstractConnector(383): Stopped ServerConnector@2092751{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:52,458 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:52,458 INFO [M:0;jenkins-hbase4:46113] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:52,458 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:52,458 INFO [M:0;jenkins-hbase4:46113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34fd62ed{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:52,459 INFO [M:0;jenkins-hbase4:46113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7410039f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:52,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:52,459 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46113,1690146627323 2023-07-23 21:10:52,459 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46113,1690146627323; all regions closed. 2023-07-23 21:10:52,459 DEBUG [M:0;jenkins-hbase4:46113] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:52,459 INFO [M:0;jenkins-hbase4:46113] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:10:52,460 INFO [M:0;jenkins-hbase4:46113] server.AbstractConnector(383): Stopped ServerConnector@87ecd56{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:52,461 DEBUG [M:0;jenkins-hbase4:46113] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:10:52,461 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-23 21:10:52,461 DEBUG [M:0;jenkins-hbase4:46113] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:10:52,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146631175] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146631175,5,FailOnTimeoutGroup] 2023-07-23 21:10:52,461 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146631172] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146631172,5,FailOnTimeoutGroup] 2023-07-23 21:10:52,461 INFO [M:0;jenkins-hbase4:46113] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:10:52,461 INFO [M:0;jenkins-hbase4:46113] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 21:10:52,461 INFO [M:0;jenkins-hbase4:46113] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 21:10:52,461 DEBUG [M:0;jenkins-hbase4:46113] master.HMaster(1512): Stopping service threads 2023-07-23 21:10:52,461 INFO [M:0;jenkins-hbase4:46113] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:10:52,462 ERROR [M:0;jenkins-hbase4:46113] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-23 21:10:52,462 INFO [M:0;jenkins-hbase4:46113] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:10:52,463 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 21:10:52,463 DEBUG [M:0;jenkins-hbase4:46113] zookeeper.ZKUtil(398): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:10:52,463 WARN [M:0;jenkins-hbase4:46113] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:10:52,463 INFO [M:0;jenkins-hbase4:46113] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:10:52,463 INFO [M:0;jenkins-hbase4:46113] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:10:52,463 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:52,463 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:52,464 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:10:52,464 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:52,464 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:52,464 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=490.22 KB heapSize=586.09 KB 2023-07-23 21:10:52,482 INFO [M:0;jenkins-hbase4:46113] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=490.22 KB at sequenceid=1080 (bloomFilter=true), to=hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9954b332c52345b98ace75ca563e97f2 2023-07-23 21:10:52,489 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9954b332c52345b98ace75ca563e97f2 as hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9954b332c52345b98ace75ca563e97f2 2023-07-23 21:10:52,494 INFO [M:0;jenkins-hbase4:46113] regionserver.HStore(1080): Added hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9954b332c52345b98ace75ca563e97f2, entries=145, sequenceid=1080, filesize=25.7 K 2023-07-23 21:10:52,495 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegion(2948): Finished flush of dataSize ~490.22 KB/501982, heapSize ~586.07 KB/600136, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=1080, compaction requested=false 2023-07-23 21:10:52,496 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:52,497 DEBUG [M:0;jenkins-hbase4:46113] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:52,500 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:52,500 INFO [M:0;jenkins-hbase4:46113] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:10:52,500 INFO [M:0;jenkins-hbase4:46113] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46113 2023-07-23 21:10:52,502 DEBUG [M:0;jenkins-hbase4:46113] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46113,1690146627323 already deleted, retry=false 2023-07-23 21:10:52,513 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 21:10:52,710 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,710 INFO [M:0;jenkins-hbase4:46113] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46113,1690146627323; zookeeper connection closed. 
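The shutdown recorded above and the minicluster restart that follows are the standard HBaseTestingUtility lifecycle; the log itself names the entry point ("Starting up minicluster with option: StartMiniClusterOption{numMasters=1, ..., numRegionServers=3, ..., numDataNodes=3, ...}"). A minimal Java sketch of that lifecycle, assuming the stock branch-2.4 test-utility API rather than this test's actual setup code, and using an illustrative class name that is not part of the test, is:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Illustrative sketch only; TestRSGroupsAdmin1 wires this up through its own base classes.
    public class MiniClusterLifecycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Matches the options the log prints: 1 master, 3 region servers, 3 datanodes.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build();

        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, the master and region servers
        try {
          // the test body would run against util.getConnection() here
        } finally {
          util.shutdownMiniCluster();    // drives the region closes, memstore flushes and
                                         // "Minicluster is down" sequence seen in this log
        }
      }
    }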
2023-07-23 21:10:52,710 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): master:46113-0x10194055df50000, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,810 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,810 INFO [RS:1;jenkins-hbase4:46093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46093,1690146629455; zookeeper connection closed. 2023-07-23 21:10:52,810 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:46093-0x10194055df50002, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,810 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2f551f68] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2f551f68 2023-07-23 21:10:52,910 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,910 INFO [RS:2;jenkins-hbase4:37385] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37385,1690146629650; zookeeper connection closed. 2023-07-23 21:10:52,910 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): regionserver:37385-0x10194055df50003, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:52,913 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@423e01bf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@423e01bf 2023-07-23 21:10:52,914 INFO [Listener at localhost/39787] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-23 21:10:52,914 WARN [Listener at localhost/39787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:52,918 WARN [BP-404994070-172.31.14.131-1690146623480 heartbeating to localhost/127.0.0.1:46635] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-404994070-172.31.14.131-1690146623480 (Datanode Uuid 948374e2-cd86-40c5-bf32-7f54f38f83f4) service to localhost/127.0.0.1:46635 2023-07-23 21:10:52,920 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data5/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:52,920 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data6/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:52,925 INFO [Listener at localhost/39787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:53,031 
WARN [Listener at localhost/39787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:53,033 INFO [Listener at localhost/39787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:53,137 WARN [BP-404994070-172.31.14.131-1690146623480 heartbeating to localhost/127.0.0.1:46635] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:10:53,137 WARN [BP-404994070-172.31.14.131-1690146623480 heartbeating to localhost/127.0.0.1:46635] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-404994070-172.31.14.131-1690146623480 (Datanode Uuid a91c3a38-048b-499b-8a33-9d8067b682fa) service to localhost/127.0.0.1:46635 2023-07-23 21:10:53,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data3/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:53,138 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data4/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:53,139 WARN [Listener at localhost/39787] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:53,144 INFO [Listener at localhost/39787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:53,248 WARN [BP-404994070-172.31.14.131-1690146623480 heartbeating to localhost/127.0.0.1:46635] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:10:53,248 WARN [BP-404994070-172.31.14.131-1690146623480 heartbeating to localhost/127.0.0.1:46635] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-404994070-172.31.14.131-1690146623480 (Datanode Uuid 0d41bf30-2479-45be-a34b-ccd64f7ddc57) service to localhost/127.0.0.1:46635 2023-07-23 21:10:53,248 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data1/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:53,249 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/cluster_54da2311-c62c-6b72-bc7f-9628b7b66b37/dfs/data/data2/current/BP-404994070-172.31.14.131-1690146623480] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:53,300 INFO [Listener at localhost/39787] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:53,328 INFO [Listener at localhost/39787] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 21:10:53,377 INFO [Listener at localhost/39787] 
hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.log.dir so I do NOT create it in target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c5487a24-9522-a4c8-9e02-102e2bf245fd/hadoop.tmp.dir so I do NOT create it in target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602, deleteOnExit=true 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/test.cache.data in system properties and HBase conf 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.tmp.dir in system properties and HBase conf 2023-07-23 21:10:53,378 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir in system properties and HBase conf 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-23 21:10:53,379 DEBUG [Listener at localhost/39787] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-23 21:10:53,379 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/nfs.dump.dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 21:10:53,380 INFO [Listener at localhost/39787] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 21:10:53,385 WARN [Listener at localhost/39787] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 21:10:53,385 WARN [Listener at localhost/39787] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 21:10:53,417 DEBUG [Listener at localhost/39787-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10194055df5000a, quorum=127.0.0.1:59206, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 21:10:53,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10194055df5000a, quorum=127.0.0.1:59206, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 21:10:53,464 WARN [Listener at localhost/39787] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:53,467 INFO [Listener at localhost/39787] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:53,472 INFO [Listener at localhost/39787] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/Jetty_localhost_42187_hdfs____.31msv8/webapp 2023-07-23 21:10:53,566 INFO [Listener at localhost/39787] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42187 2023-07-23 21:10:53,571 WARN [Listener at localhost/39787] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 21:10:53,571 WARN [Listener at localhost/39787] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 21:10:53,613 WARN [Listener at localhost/39917] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:53,630 WARN [Listener at localhost/39917] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:53,633 WARN [Listener 
at localhost/39917] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:53,634 INFO [Listener at localhost/39917] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:53,643 INFO [Listener at localhost/39917] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/Jetty_localhost_45341_datanode____7f8euu/webapp 2023-07-23 21:10:53,738 INFO [Listener at localhost/39917] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45341 2023-07-23 21:10:53,745 WARN [Listener at localhost/44439] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:53,771 WARN [Listener at localhost/44439] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:53,774 WARN [Listener at localhost/44439] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:53,776 INFO [Listener at localhost/44439] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:53,784 INFO [Listener at localhost/44439] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/Jetty_localhost_46291_datanode____39ui27/webapp 2023-07-23 21:10:53,869 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ced047cedef06d6: Processing first storage report for DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79 from datanode 330256fe-b3ac-4fa5-a8c4-f16d2b14ad89 2023-07-23 21:10:53,869 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ced047cedef06d6: from storage DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79 node DatanodeRegistration(127.0.0.1:36693, datanodeUuid=330256fe-b3ac-4fa5-a8c4-f16d2b14ad89, infoPort=44201, infoSecurePort=0, ipcPort=44439, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:53,869 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7ced047cedef06d6: Processing first storage report for DS-e4b45d32-8a0a-46bf-8660-9f8a4d860e3a from datanode 330256fe-b3ac-4fa5-a8c4-f16d2b14ad89 2023-07-23 21:10:53,869 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7ced047cedef06d6: from storage DS-e4b45d32-8a0a-46bf-8660-9f8a4d860e3a node DatanodeRegistration(127.0.0.1:36693, datanodeUuid=330256fe-b3ac-4fa5-a8c4-f16d2b14ad89, infoPort=44201, infoSecurePort=0, ipcPort=44439, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:53,910 INFO [Listener at localhost/44439] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46291 2023-07-23 21:10:53,917 WARN [Listener at localhost/45789] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-23 21:10:53,936 WARN [Listener at localhost/45789] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:53,938 WARN [Listener at localhost/45789] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:53,939 INFO [Listener at localhost/45789] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:53,945 INFO [Listener at localhost/45789] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/Jetty_localhost_46681_datanode____.up5goc/webapp 2023-07-23 21:10:54,052 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x809ede66ac244fb5: Processing first storage report for DS-652909dd-d134-4d14-90e7-e12341832a4b from datanode a31f855b-dd01-47bd-944a-8731bc5d1293 2023-07-23 21:10:54,052 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x809ede66ac244fb5: from storage DS-652909dd-d134-4d14-90e7-e12341832a4b node DatanodeRegistration(127.0.0.1:43401, datanodeUuid=a31f855b-dd01-47bd-944a-8731bc5d1293, infoPort=43593, infoSecurePort=0, ipcPort=45789, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:54,052 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x809ede66ac244fb5: Processing first storage report for DS-f79ab37e-86fe-4d99-aaa3-1eccdebba2a6 from datanode a31f855b-dd01-47bd-944a-8731bc5d1293 2023-07-23 21:10:54,052 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x809ede66ac244fb5: from storage DS-f79ab37e-86fe-4d99-aaa3-1eccdebba2a6 node DatanodeRegistration(127.0.0.1:43401, datanodeUuid=a31f855b-dd01-47bd-944a-8731bc5d1293, infoPort=43593, infoSecurePort=0, ipcPort=45789, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:54,074 INFO [Listener at localhost/45789] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46681 2023-07-23 21:10:54,086 WARN [Listener at localhost/44181] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:54,212 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe37488df7aca68f7: Processing first storage report for DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56 from datanode fa73450e-494d-470b-961a-686d307f3688 2023-07-23 21:10:54,212 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe37488df7aca68f7: from storage DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56 node DatanodeRegistration(127.0.0.1:38061, datanodeUuid=fa73450e-494d-470b-961a-686d307f3688, infoPort=41733, infoSecurePort=0, ipcPort=44181, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:54,212 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe37488df7aca68f7: Processing first storage 
report for DS-6f6e5087-f71f-4cd1-8392-3cafe0697e59 from datanode fa73450e-494d-470b-961a-686d307f3688 2023-07-23 21:10:54,212 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe37488df7aca68f7: from storage DS-6f6e5087-f71f-4cd1-8392-3cafe0697e59 node DatanodeRegistration(127.0.0.1:38061, datanodeUuid=fa73450e-494d-470b-961a-686d307f3688, infoPort=41733, infoSecurePort=0, ipcPort=44181, storageInfo=lv=-57;cid=testClusterID;nsid=1861953189;c=1690146653387), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:54,299 DEBUG [Listener at localhost/44181] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662 2023-07-23 21:10:54,300 DEBUG [Listener at localhost/44181] zookeeper.MiniZooKeeperCluster(243): Failed binding ZK Server to client port: 50824 java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:461) at sun.nio.ch.Net.bind(Net.java:453) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:222) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:78) at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:687) at org.apache.zookeeper.server.ServerCnxnFactory.configure(ServerCnxnFactory.java:76) at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:239) at org.apache.hadoop.hbase.HBaseZKTestingUtility.startMiniZKCluster(HBaseZKTestingUtility.java:129) at org.apache.hadoop.hbase.HBaseZKTestingUtility.startMiniZKCluster(HBaseZKTestingUtility.java:102) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1090) at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1048) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.toggleQuotaCheckAndRestartMiniCluster(TestRSGroupsAdmin1.java:492) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.testRSGroupListDoesNotContainFailedTableCreation(TestRSGroupsAdmin1.java:410) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2023-07-23 21:10:54,302 INFO [Listener at localhost/44181] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/zookeeper_0, clientPort=50825, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 21:10:54,304 INFO [Listener at localhost/44181] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50825 2023-07-23 21:10:54,304 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,306 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,354 INFO [Listener at localhost/44181] util.FSUtils(471): Created version file at hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874 with version=8 2023-07-23 21:10:54,354 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/hbase-staging 2023-07-23 21:10:54,355 DEBUG [Listener at localhost/44181] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 21:10:54,355 DEBUG [Listener at localhost/44181] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 21:10:54,356 DEBUG [Listener at localhost/44181] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 21:10:54,356 DEBUG [Listener at localhost/44181] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
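The BindException above is expected noise: MiniZooKeeperCluster logs the failed bind at DEBUG and simply moves on to another client port, ending up on 50825 in the 'Started ... clientPort=50825' entry that follows. The trace also shows where the restart originates, TestRSGroupsAdmin1.toggleQuotaCheckAndRestartMiniCluster driving HBaseTestingUtility.startMiniCluster. A minimal sketch of that kind of restart, assuming a JUnit-style test that owns an HBaseTestingUtility; the class name, the helper body and the quota flag it flips are guesses from the method name, not the actual test source:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      // Tear the whole mini cluster down, flip the quota switch, and bring it back up
      // with the topology this log shows: one master, three region servers, three datanodes.
      static void toggleQuotaCheckAndRestart(boolean enableQuota) throws Exception {
        TEST_UTIL.shutdownMiniCluster();
        TEST_UTIL.getConfiguration().setBoolean("hbase.quota.enabled", enableQuota);
        TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build());
      }
    }

Because the master, region server and info-server ports are all re-picked on startup (the 'Setting ... Port to random' entries above), a restart can collide with a port that is still in use, which is exactly what the DEBUG-level BindException records before the retry succeeds.
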
2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:54,357 INFO [Listener at localhost/44181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:54,359 INFO [Listener at localhost/44181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37565 2023-07-23 21:10:54,359 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,360 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,361 INFO [Listener at localhost/44181] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37565 connecting to ZooKeeper ensemble=127.0.0.1:50825 2023-07-23 21:10:54,369 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:375650x0, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:54,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37565-0x1019405cb610000 connected 2023-07-23 21:10:54,388 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:54,389 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:54,389 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:54,390 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37565 2023-07-23 21:10:54,391 DEBUG [Listener at localhost/44181] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37565 2023-07-23 21:10:54,398 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37565 2023-07-23 21:10:54,399 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37565 2023-07-23 21:10:54,399 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37565 2023-07-23 21:10:54,401 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:54,401 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:54,401 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:54,402 INFO [Listener at localhost/44181] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 21:10:54,402 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:54,402 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:54,402 INFO [Listener at localhost/44181] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
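The 'Instantiated default.FPBQ.Fifo ... handlerCount=3, maxQueueLength=30' lines reflect the deliberately small RPC footprint the mini cluster runs with. A rough sketch of the standard configuration keys behind those numbers; the key names are real HBase settings, but the values are only chosen to echo what this log prints, and the test harness may arrive at them by other means:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Handler pool size; surfaces as handlerCount in the executor lines above.
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Per-executor call queue capacity; surfaces as maxQueueLength=30 above.
        conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
        // 0 keeps the default executor a single FIFO (default.FPBQ above); a value > 0
        // would split it into read/write lanes the way the priority.RWQ executor is split.
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.0f);
        // A non-zero scan ratio would carve dedicated scan queues out of the read lanes
        // (the log shows scanQueues=0 / scanHandlers=0).
        conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);
        System.out.println("handlers = " + conf.getInt("hbase.regionserver.handler.count", 30));
      }
    }
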
2023-07-23 21:10:54,403 INFO [Listener at localhost/44181] http.HttpServer(1146): Jetty bound to port 36319 2023-07-23 21:10:54,403 INFO [Listener at localhost/44181] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:54,405 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,405 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70dcb28a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:54,406 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,406 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d7341f1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:54,540 INFO [Listener at localhost/44181] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:54,542 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:54,542 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:54,542 INFO [Listener at localhost/44181] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:54,546 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,548 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7cc32677{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/jetty-0_0_0_0-36319-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8007536507869432581/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:10:54,549 INFO [Listener at localhost/44181] server.AbstractConnector(333): Started ServerConnector@5922bff8{HTTP/1.1, (http/1.1)}{0.0.0.0:36319} 2023-07-23 21:10:54,549 INFO [Listener at localhost/44181] server.Server(415): Started @32983ms 2023-07-23 21:10:54,550 INFO [Listener at localhost/44181] master.HMaster(444): hbase.rootdir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874, hbase.cluster.distributed=false 2023-07-23 21:10:54,571 INFO [Listener at localhost/44181] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:54,571 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,571 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,571 
INFO [Listener at localhost/44181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:54,572 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,572 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:54,572 INFO [Listener at localhost/44181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:54,574 INFO [Listener at localhost/44181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41449 2023-07-23 21:10:54,574 INFO [Listener at localhost/44181] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:54,576 DEBUG [Listener at localhost/44181] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:54,576 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,578 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,579 INFO [Listener at localhost/44181] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41449 connecting to ZooKeeper ensemble=127.0.0.1:50825 2023-07-23 21:10:54,582 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:414490x0, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:54,583 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:414490x0, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:54,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41449-0x1019405cb610001 connected 2023-07-23 21:10:54,584 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:54,585 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:54,589 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41449 2023-07-23 21:10:54,589 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41449 2023-07-23 21:10:54,590 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41449 2023-07-23 21:10:54,591 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41449 2023-07-23 21:10:54,591 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41449 2023-07-23 21:10:54,594 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:54,594 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:54,594 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:54,595 INFO [Listener at localhost/44181] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:54,595 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:54,595 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:54,595 INFO [Listener at localhost/44181] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:54,597 INFO [Listener at localhost/44181] http.HttpServer(1146): Jetty bound to port 39547 2023-07-23 21:10:54,597 INFO [Listener at localhost/44181] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:54,611 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,612 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6aae7eec{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:54,612 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,612 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51f99f2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:54,733 INFO [Listener at localhost/44181] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:54,734 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:54,734 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:54,734 INFO [Listener at localhost/44181] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:54,735 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,736 INFO 
[Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@582e1291{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/jetty-0_0_0_0-39547-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8902309093682210670/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:54,737 INFO [Listener at localhost/44181] server.AbstractConnector(333): Started ServerConnector@14730114{HTTP/1.1, (http/1.1)}{0.0.0.0:39547} 2023-07-23 21:10:54,737 INFO [Listener at localhost/44181] server.Server(415): Started @33171ms 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:54,749 INFO [Listener at localhost/44181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:54,752 INFO [Listener at localhost/44181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38003 2023-07-23 21:10:54,752 INFO [Listener at localhost/44181] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:54,753 DEBUG [Listener at localhost/44181] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:54,753 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,754 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,755 INFO [Listener at localhost/44181] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38003 connecting to ZooKeeper ensemble=127.0.0.1:50825 2023-07-23 21:10:54,759 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:380030x0, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 
21:10:54,760 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38003-0x1019405cb610002 connected 2023-07-23 21:10:54,760 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:54,760 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:54,761 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:54,762 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38003 2023-07-23 21:10:54,763 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38003 2023-07-23 21:10:54,771 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38003 2023-07-23 21:10:54,773 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38003 2023-07-23 21:10:54,773 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38003 2023-07-23 21:10:54,775 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:54,776 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:54,776 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:54,776 INFO [Listener at localhost/44181] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:54,777 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:54,777 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:54,777 INFO [Listener at localhost/44181] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
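Every process in these entries registers watchers before the corresponding znodes exist ('Set watcher on znode that does not yet exist, /hbase/master', '/hbase/running', '/hbase/acl'), which is why the NodeCreated events further down are delivered to master and region servers alike. A small stand-alone illustration of that watch-before-create pattern with the raw ZooKeeper client; the connect string is the quorum this log reports, while the class name, session timeout and printed messages are made up (HBase itself goes through RecoverableZooKeeper/ZKUtil as shown above):

    import org.apache.zookeeper.ZooKeeper;

    public class WatchMissingZNodeSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the test quorum from the log; the 30s session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50825", 30_000, event ->
            System.out.println("event type=" + event.getType()
                + " state=" + event.getState() + " path=" + event.getPath()));
        // exists() on a missing path returns null but still registers the default watcher,
        // so a later create of /hbase/master fires a NodeCreated event like the ones
        // that appear further down in this log.
        if (zk.exists("/hbase/master", true) == null) {
          System.out.println("watch set on not-yet-existing /hbase/master");
        }
        Thread.sleep(60_000);  // give the watcher a chance to fire before exiting
        zk.close();
      }
    }
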
2023-07-23 21:10:54,778 INFO [Listener at localhost/44181] http.HttpServer(1146): Jetty bound to port 42263 2023-07-23 21:10:54,778 INFO [Listener at localhost/44181] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:54,780 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,780 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@404c009{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:54,780 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,781 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@199af640{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:54,895 INFO [Listener at localhost/44181] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:54,896 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:54,896 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:54,896 INFO [Listener at localhost/44181] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:10:54,897 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,898 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@25612b5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/jetty-0_0_0_0-42263-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8465190506555008817/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:54,899 INFO [Listener at localhost/44181] server.AbstractConnector(333): Started ServerConnector@601eee72{HTTP/1.1, (http/1.1)}{0.0.0.0:42263} 2023-07-23 21:10:54,899 INFO [Listener at localhost/44181] server.Server(415): Started @33333ms 2023-07-23 21:10:54,910 INFO [Listener at localhost/44181] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:54,911 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,911 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,911 INFO [Listener at localhost/44181] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:54,911 INFO 
[Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:54,911 INFO [Listener at localhost/44181] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:54,911 INFO [Listener at localhost/44181] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:54,914 INFO [Listener at localhost/44181] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37991 2023-07-23 21:10:54,914 INFO [Listener at localhost/44181] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:10:54,915 DEBUG [Listener at localhost/44181] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:10:54,916 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,917 INFO [Listener at localhost/44181] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:54,918 INFO [Listener at localhost/44181] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37991 connecting to ZooKeeper ensemble=127.0.0.1:50825 2023-07-23 21:10:54,924 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:379910x0, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:54,925 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:379910x0, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:54,926 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:379910x0, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:54,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37991-0x1019405cb610003 connected 2023-07-23 21:10:54,927 DEBUG [Listener at localhost/44181] zookeeper.ZKUtil(164): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:54,930 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37991 2023-07-23 21:10:54,931 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37991 2023-07-23 21:10:54,946 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37991 2023-07-23 21:10:54,947 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37991 2023-07-23 21:10:54,947 DEBUG [Listener at localhost/44181] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37991 
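At this point the three region server RPC endpoints (ports 41449, 38003 and 37991) and the master RPC endpoint are bound, all registered against the ZooKeeper ensemble at 127.0.0.1:50825. Once the master finishes becoming active a little further down, that quorum address is all an external client needs. A throwaway client sketch against this mini cluster; the quorum and client port are copied from the log, everything else (class name, what the Admin call is used for) is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class MiniClusterClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Quorum and client port as reported by MiniZooKeeperCluster earlier in this log.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.setInt("hbase.zookeeper.property.clientPort", 50825);
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          // List whatever tables the test has created so far.
          admin.listTableDescriptors().forEach(td -> System.out.println(td.getTableName()));
        }
      }
    }
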
2023-07-23 21:10:54,949 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:54,949 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:54,949 INFO [Listener at localhost/44181] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:54,950 INFO [Listener at localhost/44181] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:10:54,950 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:54,950 INFO [Listener at localhost/44181] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:54,950 INFO [Listener at localhost/44181] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:10:54,951 INFO [Listener at localhost/44181] http.HttpServer(1146): Jetty bound to port 44775 2023-07-23 21:10:54,951 INFO [Listener at localhost/44181] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:54,955 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,955 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ee09bda{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:54,956 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:54,956 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76e8f8f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:10:55,068 INFO [Listener at localhost/44181] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:10:55,069 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:10:55,069 INFO [Listener at localhost/44181] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:10:55,069 INFO [Listener at localhost/44181] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:10:55,070 INFO [Listener at localhost/44181] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:55,071 INFO [Listener at localhost/44181] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@20fbf257{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/java.io.tmpdir/jetty-0_0_0_0-44775-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3974154232262546023/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:55,072 INFO [Listener at localhost/44181] server.AbstractConnector(333): Started ServerConnector@2d986ee8{HTTP/1.1, (http/1.1)}{0.0.0.0:44775} 2023-07-23 21:10:55,072 INFO [Listener at localhost/44181] server.Server(415): Started @33506ms 2023-07-23 21:10:55,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:55,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7528cdf{HTTP/1.1, (http/1.1)}{0.0.0.0:44961} 2023-07-23 21:10:55,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @33515ms 2023-07-23 21:10:55,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,084 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:55,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,086 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:55,086 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:55,086 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:55,086 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:55,088 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,089 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:55,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37565,1690146654356 from backup master directory 2023-07-23 21:10:55,090 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:10:55,091 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,091 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:10:55,091 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:55,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/hbase.id with ID: 4a6bb142-d6e3-4ef8-847c-6a6982d16794 2023-07-23 21:10:55,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:55,121 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1e176928 to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:55,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7074b01c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:55,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:55,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:10:55,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:55,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store-tmp 2023-07-23 21:10:55,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:55,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:55,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:55,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:55,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:55,157 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
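The 'Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1...'}' entry materializes the master's local store region with a single 'proc' column family. That descriptor is assembled internally by MasterRegion; the sketch below merely mirrors the attributes the log prints using the public 2.x builder API, so it is illustrative rather than the code path HBase actually runs:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MasterStoreDescriptorSketch {
      // Rebuild a descriptor with the same shape as the logged 'master:store' table.
      static TableDescriptor procStoreDescriptor() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setMaxVersions(1)                   // VERSIONS => '1'
                .setBlocksize(65536)                 // BLOCKSIZE => '65536'
                .setBlockCacheEnabled(true)          // BLOCKCACHE => 'true'
                .build())
            .build();
      }
    }
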
2023-07-23 21:10:55,157 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:55,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/WALs/jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,161 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37565%2C1690146654356, suffix=, logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/WALs/jenkins-hbase4.apache.org,37565,1690146654356, archiveDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/oldWALs, maxLogs=10 2023-07-23 21:10:55,177 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK] 2023-07-23 21:10:55,179 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK] 2023-07-23 21:10:55,179 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK] 2023-07-23 21:10:55,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/WALs/jenkins-hbase4.apache.org,37565,1690146654356/jenkins-hbase4.apache.org%2C37565%2C1690146654356.1690146655161 2023-07-23 21:10:55,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK], DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK], DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK]] 2023-07-23 21:10:55,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:55,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,190 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,191 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:10:55,192 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:10:55,193 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,193 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,196 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:10:55,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:55,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11140387520, jitterRate=0.037529438734054565}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:55,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:55,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:10:55,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:10:55,200 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:10:55,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 21:10:55,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 21:10:55,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 21:10:55,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:10:55,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 21:10:55,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-23 21:10:55,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 21:10:55,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:10:55,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:10:55,208 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:10:55,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:10:55,209 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:10:55,212 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:55,212 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:55,212 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-23 21:10:55,212 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,212 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37565,1690146654356, sessionid=0x1019405cb610000, setting cluster-up flag (Was=false) 2023-07-23 21:10:55,215 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:55,220 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:10:55,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,231 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:10:55,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:55,241 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.hbase-snapshot/.tmp 2023-07-23 21:10:55,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:10:55,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:10:55,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:10:55,247 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 21:10:55,247 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-23 21:10:55,249 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:55,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:55,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:55,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:10:55,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:10:55,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:55,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,285 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(951): ClusterId : 4a6bb142-d6e3-4ef8-847c-6a6982d16794 2023-07-23 21:10:55,285 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(951): ClusterId : 4a6bb142-d6e3-4ef8-847c-6a6982d16794 2023-07-23 21:10:55,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146685285 2023-07-23 21:10:55,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:10:55,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:10:55,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:10:55,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:10:55,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:10:55,286 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:10:55,288 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:55,288 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 
2023-07-23 21:10:55,286 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(951): ClusterId : 4a6bb142-d6e3-4ef8-847c-6a6982d16794 2023-07-23 21:10:55,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,288 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:55,288 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:10:55,288 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 21:10:55,290 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:55,290 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:55,290 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:55,292 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:55,292 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:55,293 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:55,294 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:55,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:10:55,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:10:55,299 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:10:55,299 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:10:55,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:10:55,302 
DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ReadOnlyZKClient(139): Connect 0x6dead954 to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:55,303 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ReadOnlyZKClient(139): Connect 0x346eebfe to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:55,303 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:10:55,307 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ReadOnlyZKClient(139): Connect 0x4f663f68 to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:55,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:10:55,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:10:55,319 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146655318,5,FailOnTimeoutGroup] 2023-07-23 21:10:55,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146655323,5,FailOnTimeoutGroup] 2023-07-23 21:10:55,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 21:10:55,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,349 DEBUG [RS:0;jenkins-hbase4:41449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41a1c8f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:55,349 DEBUG [RS:0;jenkins-hbase4:41449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@459e5418, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:55,350 DEBUG [RS:1;jenkins-hbase4:38003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6712b224, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:55,350 DEBUG [RS:2;jenkins-hbase4:37991] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3e96808e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:55,350 DEBUG [RS:1;jenkins-hbase4:38003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@221020a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:55,350 DEBUG [RS:2;jenkins-hbase4:37991] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b941938, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:55,354 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:55,354 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:55,355 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', 
DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874 2023-07-23 21:10:55,362 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41449 2023-07-23 21:10:55,362 INFO [RS:0;jenkins-hbase4:41449] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:55,362 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38003 2023-07-23 21:10:55,362 INFO [RS:1;jenkins-hbase4:38003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:55,362 INFO [RS:1;jenkins-hbase4:38003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:55,362 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:55,362 INFO [RS:0;jenkins-hbase4:41449] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:55,362 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:10:55,364 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37565,1690146654356 with isa=jenkins-hbase4.apache.org/172.31.14.131:41449, startcode=1690146654570 2023-07-23 21:10:55,364 DEBUG [RS:0;jenkins-hbase4:41449] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:55,364 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37991 2023-07-23 21:10:55,364 INFO [RS:2;jenkins-hbase4:37991] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:10:55,364 INFO [RS:2;jenkins-hbase4:37991] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:10:55,364 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 21:10:55,365 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37565,1690146654356 with isa=jenkins-hbase4.apache.org/172.31.14.131:37991, startcode=1690146654910 2023-07-23 21:10:55,365 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37565,1690146654356 with isa=jenkins-hbase4.apache.org/172.31.14.131:38003, startcode=1690146654748 2023-07-23 21:10:55,365 DEBUG [RS:2;jenkins-hbase4:37991] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:55,365 DEBUG [RS:1;jenkins-hbase4:38003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:10:55,367 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56777, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:55,368 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45255, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:55,368 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53817, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:10:55,369 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37565] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,370 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:55,370 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:10:55,371 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37565] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:10:55,371 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874 2023-07-23 21:10:55,371 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37565] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,371 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39917 2023-07-23 21:10:55,371 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874 2023-07-23 21:10:55,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 21:10:55,371 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39917 2023-07-23 21:10:55,371 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36319 2023-07-23 21:10:55,371 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36319 2023-07-23 21:10:55,371 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:10:55,372 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:10:55,371 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874 2023-07-23 21:10:55,372 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39917 2023-07-23 21:10:55,372 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36319 2023-07-23 21:10:55,377 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:55,379 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ZKUtil(162): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,379 WARN [RS:0;jenkins-hbase4:41449] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:55,379 INFO [RS:0;jenkins-hbase4:41449] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:55,379 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ZKUtil(162): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,379 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,380 WARN [RS:2;jenkins-hbase4:37991] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:10:55,380 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ZKUtil(162): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,380 INFO [RS:2;jenkins-hbase4:37991] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:55,380 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38003,1690146654748] 2023-07-23 21:10:55,380 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,380 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41449,1690146654570] 2023-07-23 21:10:55,380 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37991,1690146654910] 2023-07-23 21:10:55,380 WARN [RS:1;jenkins-hbase4:38003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:10:55,380 INFO [RS:1;jenkins-hbase4:38003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:55,380 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,388 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ZKUtil(162): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,388 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ZKUtil(162): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,388 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ZKUtil(162): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,388 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ZKUtil(162): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,388 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ZKUtil(162): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,389 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ZKUtil(162): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,389 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ZKUtil(162): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,389 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ZKUtil(162): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,389 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,389 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ZKUtil(162): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,390 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:55,390 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:55,390 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:10:55,390 INFO [RS:0;jenkins-hbase4:41449] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:55,390 INFO [RS:1;jenkins-hbase4:38003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 
5000 milliseconds 2023-07-23 21:10:55,390 INFO [RS:2;jenkins-hbase4:37991] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:10:55,391 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:55,392 INFO [RS:0;jenkins-hbase4:41449] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:55,392 INFO [RS:0;jenkins-hbase4:41449] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:55,392 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,392 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:55,392 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/info 2023-07-23 21:10:55,393 INFO [RS:2;jenkins-hbase4:37991] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:55,393 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:55,394 INFO [RS:2;jenkins-hbase4:37991] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:55,394 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,394 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:55,394 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,394 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:55,394 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,395 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,395 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,395 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,395 INFO [RS:1;jenkins-hbase4:38003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:10:55,396 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:55,397 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,397 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,397 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,397 DEBUG [RS:0;jenkins-hbase4:41449] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,399 INFO [RS:1;jenkins-hbase4:38003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:10:55,399 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,400 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:55,400 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:55,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,401 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:55,402 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/table 2023-07-23 21:10:55,402 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,402 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:10:55,404 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:55,405 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,409 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,410 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,410 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,410 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,410 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740 2023-07-23 21:10:55,411 DEBUG [RS:2;jenkins-hbase4:37991] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,411 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:10:55,412 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,412 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,412 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,412 DEBUG [RS:1;jenkins-hbase4:38003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:10:55,417 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:10:55,418 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:55,419 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,419 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,419 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,419 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,426 INFO [RS:0;jenkins-hbase4:41449] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:55,426 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41449,1690146654570-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,435 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,435 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,435 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,435 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,435 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:55,436 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11596379680, jitterRate=0.07999701797962189}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:55,436 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:55,436 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:55,436 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:55,436 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:55,436 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:55,437 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:55,438 INFO [RS:2;jenkins-hbase4:37991] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:55,438 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37991,1690146654910-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,446 INFO [RS:1;jenkins-hbase4:38003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:10:55,447 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38003,1690146654748-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,447 INFO [RS:0;jenkins-hbase4:41449] regionserver.Replication(203): jenkins-hbase4.apache.org,41449,1690146654570 started 2023-07-23 21:10:55,447 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41449,1690146654570, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41449, sessionid=0x1019405cb610001 2023-07-23 21:10:55,448 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:55,448 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:55,448 DEBUG [RS:0;jenkins-hbase4:41449] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,448 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:55,448 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41449,1690146654570' 2023-07-23 21:10:55,448 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:55,449 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:10:55,449 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 21:10:55,449 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:10:55,450 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:55,451 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:55,455 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:10:55,455 DEBUG [RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:55,455 DEBUG [RS:0;jenkins-hbase4:41449] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,455 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41449,1690146654570' 2023-07-23 21:10:55,455 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,457 DEBUG [RS:0;jenkins-hbase4:41449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,457 DEBUG 
[RS:0;jenkins-hbase4:41449] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:55,457 INFO [RS:0;jenkins-hbase4:41449] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:10:55,457 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 21:10:55,460 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,461 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ZKUtil(398): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:10:55,461 INFO [RS:0;jenkins-hbase4:41449] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:10:55,461 INFO [RS:2;jenkins-hbase4:37991] regionserver.Replication(203): jenkins-hbase4.apache.org,37991,1690146654910 started 2023-07-23 21:10:55,461 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37991,1690146654910, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37991, sessionid=0x1019405cb610003 2023-07-23 21:10:55,462 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:55,462 DEBUG [RS:2;jenkins-hbase4:37991] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,462 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37991,1690146654910' 2023-07-23 21:10:55,462 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:55,462 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,462 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,462 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:55,463 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:55,463 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:55,463 DEBUG [RS:2;jenkins-hbase4:37991] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,463 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37991,1690146654910' 2023-07-23 21:10:55,463 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,464 DEBUG [RS:2;jenkins-hbase4:37991] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,464 DEBUG [RS:2;jenkins-hbase4:37991] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:55,464 INFO [RS:2;jenkins-hbase4:37991] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:10:55,464 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,466 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ZKUtil(398): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:10:55,466 INFO [RS:2;jenkins-hbase4:37991] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:10:55,466 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,466 INFO [RS:2;jenkins-hbase4:37991] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,468 INFO [RS:1;jenkins-hbase4:38003] regionserver.Replication(203): jenkins-hbase4.apache.org,38003,1690146654748 started 2023-07-23 21:10:55,468 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38003,1690146654748, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38003, sessionid=0x1019405cb610002 2023-07-23 21:10:55,468 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:10:55,468 DEBUG [RS:1;jenkins-hbase4:38003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,468 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38003,1690146654748' 2023-07-23 21:10:55,468 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:10:55,468 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38003,1690146654748' 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:10:55,469 DEBUG [RS:1;jenkins-hbase4:38003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:10:55,470 DEBUG [RS:1;jenkins-hbase4:38003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:10:55,470 INFO [RS:1;jenkins-hbase4:38003] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 21:10:55,470 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,470 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ZKUtil(398): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 21:10:55,470 INFO [RS:1;jenkins-hbase4:38003] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 21:10:55,470 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,470 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:10:55,567 INFO [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41449%2C1690146654570, suffix=, logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,41449,1690146654570, archiveDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs, maxLogs=32 2023-07-23 21:10:55,568 INFO [RS:2;jenkins-hbase4:37991] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37991%2C1690146654910, suffix=, logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,37991,1690146654910, archiveDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs, maxLogs=32 2023-07-23 21:10:55,572 INFO [RS:1;jenkins-hbase4:38003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38003%2C1690146654748, suffix=, logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,38003,1690146654748, archiveDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs, maxLogs=32 2023-07-23 21:10:55,601 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK] 2023-07-23 21:10:55,604 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK] 2023-07-23 21:10:55,605 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK] 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:55,608 DEBUG [jenkins-hbase4:37565] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:55,613 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK] 2023-07-23 21:10:55,613 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41449,1690146654570, 
state=OPENING 2023-07-23 21:10:55,613 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK] 2023-07-23 21:10:55,614 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK] 2023-07-23 21:10:55,614 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK] 2023-07-23 21:10:55,615 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK] 2023-07-23 21:10:55,616 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK] 2023-07-23 21:10:55,619 INFO [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,41449,1690146654570/jenkins-hbase4.apache.org%2C41449%2C1690146654570.1690146655568 2023-07-23 21:10:55,619 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 21:10:55,621 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:55,621 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:55,621 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41449,1690146654570}] 2023-07-23 21:10:55,622 DEBUG [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK], DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK], DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK]] 2023-07-23 21:10:55,627 INFO [RS:2;jenkins-hbase4:37991] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,37991,1690146654910/jenkins-hbase4.apache.org%2C37991%2C1690146654910.1690146655569 2023-07-23 21:10:55,631 INFO [RS:1;jenkins-hbase4:38003] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,38003,1690146654748/jenkins-hbase4.apache.org%2C38003%2C1690146654748.1690146655573 2023-07-23 21:10:55,631 DEBUG [RS:2;jenkins-hbase4:37991] wal.AbstractFSWAL(887): Create new AsyncFSWAL 
writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK], DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK], DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK]] 2023-07-23 21:10:55,634 DEBUG [RS:1;jenkins-hbase4:38003] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK], DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK], DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK]] 2023-07-23 21:10:55,787 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:55,788 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:55,789 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:55,794 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:10:55,794 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:10:55,796 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41449%2C1690146654570.meta, suffix=.meta, logDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,41449,1690146654570, archiveDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs, maxLogs=32 2023-07-23 21:10:55,813 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK] 2023-07-23 21:10:55,813 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK] 2023-07-23 21:10:55,813 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK] 2023-07-23 21:10:55,815 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/WALs/jenkins-hbase4.apache.org,41449,1690146654570/jenkins-hbase4.apache.org%2C41449%2C1690146654570.meta.1690146655796.meta 2023-07-23 21:10:55,815 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38061,DS-26b4ec3a-5a01-4dd9-a118-eeb23f2f3d56,DISK], DatanodeInfoWithStorage[127.0.0.1:43401,DS-652909dd-d134-4d14-90e7-e12341832a4b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36693,DS-7040b4a9-319d-443f-ab3a-268e2e0b7c79,DISK]] 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:10:55,816 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:10:55,816 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:10:55,817 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:10:55,818 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/info 2023-07-23 21:10:55,819 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/info 2023-07-23 21:10:55,819 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:10:55,819 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,820 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:10:55,820 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:55,821 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:10:55,821 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:10:55,821 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,821 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:10:55,822 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/table 2023-07-23 21:10:55,822 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/table 2023-07-23 21:10:55,823 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:10:55,823 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:55,824 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740 2023-07-23 21:10:55,825 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740 2023-07-23 21:10:55,828 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 21:10:55,830 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:10:55,831 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11397733280, jitterRate=0.06149663031101227}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:10:55,831 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:10:55,832 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690146655787 2023-07-23 21:10:55,837 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:10:55,838 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:10:55,838 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41449,1690146654570, state=OPEN 2023-07-23 21:10:55,841 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:10:55,841 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:10:55,842 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 21:10:55,843 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41449,1690146654570 in 220 msec 2023-07-23 21:10:55,844 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 21:10:55,844 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 393 msec 2023-07-23 21:10:55,846 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 598 msec 2023-07-23 21:10:55,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146655846, completionTime=-1 2023-07-23 
21:10:55,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 21:10:55,846 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-23 21:10:55,849 DEBUG [hconnection-0x27b812c7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:55,851 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:55,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:10:55,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146715853 2023-07-23 21:10:55,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146775853 2023-07-23 21:10:55,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37565,1690146654356-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37565,1690146654356-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37565,1690146654356-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37565, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 21:10:55,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:55,860 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 21:10:55,861 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 21:10:55,861 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:55,862 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:55,863 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:55,864 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6 empty. 2023-07-23 21:10:55,864 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:55,864 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 21:10:55,866 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:55,867 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:10:55,869 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:55,869 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:55,871 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:55,871 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302 empty. 2023-07-23 21:10:55,872 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:55,872 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 21:10:55,880 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:55,881 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5b60133f8cfb7d682e9e2b591dece6b6, NAME => 'hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp 2023-07-23 21:10:55,892 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:55,894 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cb531debb90d21a49a8f44fe35ec3302, NAME => 'hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp 2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5b60133f8cfb7d682e9e2b591dece6b6, disabling compactions & flushes 2023-07-23 21:10:55,907 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 
2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. after waiting 0 ms 2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:55,907 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:55,907 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5b60133f8cfb7d682e9e2b591dece6b6: 2023-07-23 21:10:55,910 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:55,911 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146655911"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146655911"}]},"ts":"1690146655911"} 2023-07-23 21:10:55,917 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:55,918 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:55,918 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146655918"}]},"ts":"1690146655918"} 2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing cb531debb90d21a49a8f44fe35ec3302, disabling compactions & flushes 2023-07-23 21:10:55,919 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. after waiting 0 ms 2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:55,919 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 
2023-07-23 21:10:55,919 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for cb531debb90d21a49a8f44fe35ec3302: 2023-07-23 21:10:55,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 21:10:55,922 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:55,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:55,925 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146655925"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146655925"}]},"ts":"1690146655925"} 2023-07-23 21:10:55,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:55,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:55,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:55,925 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:55,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5b60133f8cfb7d682e9e2b591dece6b6, ASSIGN}] 2023-07-23 21:10:55,927 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 21:10:55,927 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5b60133f8cfb7d682e9e2b591dece6b6, ASSIGN 2023-07-23 21:10:55,928 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:55,928 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5b60133f8cfb7d682e9e2b591dece6b6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37991,1690146654910; forceNewPlan=false, retain=false 2023-07-23 21:10:55,928 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146655928"}]},"ts":"1690146655928"} 2023-07-23 21:10:55,929 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 21:10:55,932 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:55,932 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:55,932 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:55,932 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:55,932 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:55,932 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cb531debb90d21a49a8f44fe35ec3302, ASSIGN}] 2023-07-23 21:10:55,934 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=cb531debb90d21a49a8f44fe35ec3302, ASSIGN 2023-07-23 21:10:55,935 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=cb531debb90d21a49a8f44fe35ec3302, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38003,1690146654748; forceNewPlan=false, retain=false 2023-07-23 21:10:55,935 INFO [jenkins-hbase4:37565] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-23 21:10:55,936 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5b60133f8cfb7d682e9e2b591dece6b6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:55,936 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146655936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146655936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146655936"}]},"ts":"1690146655936"} 2023-07-23 21:10:55,937 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=cb531debb90d21a49a8f44fe35ec3302, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:55,937 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146655937"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146655937"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146655937"}]},"ts":"1690146655937"} 2023-07-23 21:10:55,938 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 5b60133f8cfb7d682e9e2b591dece6b6, server=jenkins-hbase4.apache.org,37991,1690146654910}] 2023-07-23 21:10:55,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure cb531debb90d21a49a8f44fe35ec3302, server=jenkins-hbase4.apache.org,38003,1690146654748}] 2023-07-23 21:10:56,091 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:56,091 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:56,091 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:56,091 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:10:56,093 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45244, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:56,093 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:10:56,097 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:56,098 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 
2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5b60133f8cfb7d682e9e2b591dece6b6, NAME => 'hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cb531debb90d21a49a8f44fe35ec3302, NAME => 'hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. service=MultiRowMutationService 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,098 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,099 INFO [StoreOpener-5b60133f8cfb7d682e9e2b591dece6b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,099 INFO [StoreOpener-cb531debb90d21a49a8f44fe35ec3302-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,101 DEBUG [StoreOpener-5b60133f8cfb7d682e9e2b591dece6b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/info 2023-07-23 21:10:56,101 DEBUG [StoreOpener-5b60133f8cfb7d682e9e2b591dece6b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/info 2023-07-23 21:10:56,101 DEBUG [StoreOpener-cb531debb90d21a49a8f44fe35ec3302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/m 2023-07-23 21:10:56,101 DEBUG [StoreOpener-cb531debb90d21a49a8f44fe35ec3302-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/m 2023-07-23 21:10:56,101 INFO [StoreOpener-5b60133f8cfb7d682e9e2b591dece6b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5b60133f8cfb7d682e9e2b591dece6b6 columnFamilyName info 2023-07-23 21:10:56,101 INFO 
[StoreOpener-cb531debb90d21a49a8f44fe35ec3302-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cb531debb90d21a49a8f44fe35ec3302 columnFamilyName m 2023-07-23 21:10:56,102 INFO [StoreOpener-5b60133f8cfb7d682e9e2b591dece6b6-1] regionserver.HStore(310): Store=5b60133f8cfb7d682e9e2b591dece6b6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:56,102 INFO [StoreOpener-cb531debb90d21a49a8f44fe35ec3302-1] regionserver.HStore(310): Store=cb531debb90d21a49a8f44fe35ec3302/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:56,102 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,103 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:56,106 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:56,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:56,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cb531debb90d21a49a8f44fe35ec3302; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4fa5e11d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:56,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:56,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cb531debb90d21a49a8f44fe35ec3302: 2023-07-23 21:10:56,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5b60133f8cfb7d682e9e2b591dece6b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10712125760, jitterRate=-0.00235554575920105}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:56,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5b60133f8cfb7d682e9e2b591dece6b6: 2023-07-23 21:10:56,111 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302., pid=9, masterSystemTime=1690146656091 2023-07-23 21:10:56,114 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6., pid=8, masterSystemTime=1690146656091 2023-07-23 21:10:56,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:56,117 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:56,117 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=cb531debb90d21a49a8f44fe35ec3302, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:56,117 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146656117"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146656117"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146656117"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146656117"}]},"ts":"1690146656117"} 2023-07-23 21:10:56,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:56,118 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 
2023-07-23 21:10:56,118 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5b60133f8cfb7d682e9e2b591dece6b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:56,118 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146656118"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146656118"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146656118"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146656118"}]},"ts":"1690146656118"} 2023-07-23 21:10:56,120 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 21:10:56,121 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure cb531debb90d21a49a8f44fe35ec3302, server=jenkins-hbase4.apache.org,38003,1690146654748 in 181 msec 2023-07-23 21:10:56,121 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-23 21:10:56,121 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 5b60133f8cfb7d682e9e2b591dece6b6, server=jenkins-hbase4.apache.org,37991,1690146654910 in 181 msec 2023-07-23 21:10:56,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 21:10:56,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=cb531debb90d21a49a8f44fe35ec3302, ASSIGN in 189 msec 2023-07-23 21:10:56,123 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-23 21:10:56,123 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5b60133f8cfb7d682e9e2b591dece6b6, ASSIGN in 196 msec 2023-07-23 21:10:56,124 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:56,124 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656124"}]},"ts":"1690146656124"} 2023-07-23 21:10:56,124 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:56,124 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656124"}]},"ts":"1690146656124"} 2023-07-23 21:10:56,125 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 21:10:56,126 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 21:10:56,128 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:56,129 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:56,130 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 262 msec 2023-07-23 21:10:56,130 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 270 msec 2023-07-23 21:10:56,161 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 21:10:56,162 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:56,162 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:56,166 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:56,167 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45248, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:56,171 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 21:10:56,173 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:56,175 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49514, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:56,175 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:10:56,175 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 21:10:56,180 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:56,183 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:56,183 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:56,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-23 21:10:56,185 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:56,186 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37565,1690146654356] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:10:56,193 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:10:56,201 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:56,204 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-23 21:10:56,220 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:10:56,223 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:10:56,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.132sec 2023-07-23 21:10:56,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-23 21:10:56,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:56,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-23 21:10:56,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-23 21:10:56,226 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:56,227 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:56,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-23 21:10:56,228 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/quota/425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,229 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/quota/425a5daee36fc226b3152df4349c1dff empty. 2023-07-23 21:10:56,230 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/quota/425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,230 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-23 21:10:56,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-23 21:10:56,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-23 21:10:56,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:56,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:10:56,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-23 21:10:56,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:10:56,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37565,1690146654356-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:10:56,237 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37565,1690146654356-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:10:56,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:10:56,251 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:56,252 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 425a5daee36fc226b3152df4349c1dff, NAME => 'hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp 2023-07-23 21:10:56,288 DEBUG [Listener at localhost/44181] zookeeper.ReadOnlyZKClient(139): Connect 0x40629047 to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:56,294 DEBUG [Listener at localhost/44181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a62c2a6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:56,295 DEBUG [hconnection-0x6e9572c4-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:10:56,298 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54300, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:10:56,299 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:56,300 INFO [Listener at localhost/44181] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:10:56,302 DEBUG [Listener at localhost/44181] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:10:56,304 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45206, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:10:56,309 DEBUG [Listener at localhost/44181-EventThread] 
zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:10:56,309 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:56,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 21:10:56,310 DEBUG [Listener at localhost/44181] zookeeper.ReadOnlyZKClient(139): Connect 0x191c9b36 to 127.0.0.1:50825 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:10:56,314 DEBUG [Listener at localhost/44181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c5ad983, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:10:56,315 INFO [Listener at localhost/44181] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50825 2023-07-23 21:10:56,317 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:56,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019405cb61000a connected 2023-07-23 21:10:56,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-23 21:10:56,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-23 21:10:56,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-23 21:10:56,332 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:56,337 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-23 21:10:56,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-23 21:10:56,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:56,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=14, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-23 21:10:56,437 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:56,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 14 2023-07-23 21:10:56,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 21:10:56,439 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:56,439 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:56,442 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:10:56,443 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,444 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 empty. 2023-07-23 21:10:56,444 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,445 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 21:10:56,459 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-23 21:10:56,461 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 25d6e02cd84117e7d88af3cdf575a5f7, NAME => 'np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp 2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 25d6e02cd84117e7d88af3cdf575a5f7, disabling compactions & flushes 2023-07-23 21:10:56,469 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 
2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. after waiting 0 ms 2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:56,469 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:56,469 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 25d6e02cd84117e7d88af3cdf575a5f7: 2023-07-23 21:10:56,471 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:56,472 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146656472"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146656472"}]},"ts":"1690146656472"} 2023-07-23 21:10:56,474 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:56,474 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:56,475 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656474"}]},"ts":"1690146656474"} 2023-07-23 21:10:56,476 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-23 21:10:56,479 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:56,479 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:56,479 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:56,479 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:56,479 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:56,479 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, ASSIGN}] 2023-07-23 21:10:56,482 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, ASSIGN 2023-07-23 21:10:56,483 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=25d6e02cd84117e7d88af3cdf575a5f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41449,1690146654570; forceNewPlan=false, retain=false 2023-07-23 21:10:56,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 21:10:56,633 INFO [jenkins-hbase4:37565] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:10:56,634 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=25d6e02cd84117e7d88af3cdf575a5f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:56,635 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146656634"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146656634"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146656634"}]},"ts":"1690146656634"} 2023-07-23 21:10:56,638 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE; OpenRegionProcedure 25d6e02cd84117e7d88af3cdf575a5f7, server=jenkins-hbase4.apache.org,41449,1690146654570}] 2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 425a5daee36fc226b3152df4349c1dff, disabling compactions & flushes 2023-07-23 21:10:56,667 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. after waiting 0 ms 2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:56,667 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 
2023-07-23 21:10:56,667 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 425a5daee36fc226b3152df4349c1dff: 2023-07-23 21:10:56,669 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:10:56,670 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146656670"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146656670"}]},"ts":"1690146656670"} 2023-07-23 21:10:56,671 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:10:56,672 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:10:56,672 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656672"}]},"ts":"1690146656672"} 2023-07-23 21:10:56,673 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-23 21:10:56,678 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:10:56,678 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:10:56,678 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:10:56,678 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:10:56,678 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:10:56,678 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=425a5daee36fc226b3152df4349c1dff, ASSIGN}] 2023-07-23 21:10:56,679 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=425a5daee36fc226b3152df4349c1dff, ASSIGN 2023-07-23 21:10:56,680 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=425a5daee36fc226b3152df4349c1dff, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41449,1690146654570; forceNewPlan=false, retain=false 2023-07-23 21:10:56,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 21:10:56,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 
2023-07-23 21:10:56,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 25d6e02cd84117e7d88af3cdf575a5f7, NAME => 'np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:56,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,796 INFO [StoreOpener-25d6e02cd84117e7d88af3cdf575a5f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,798 DEBUG [StoreOpener-25d6e02cd84117e7d88af3cdf575a5f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/fam1 2023-07-23 21:10:56,798 DEBUG [StoreOpener-25d6e02cd84117e7d88af3cdf575a5f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/fam1 2023-07-23 21:10:56,798 INFO [StoreOpener-25d6e02cd84117e7d88af3cdf575a5f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 25d6e02cd84117e7d88af3cdf575a5f7 columnFamilyName fam1 2023-07-23 21:10:56,799 INFO [StoreOpener-25d6e02cd84117e7d88af3cdf575a5f7-1] regionserver.HStore(310): Store=25d6e02cd84117e7d88af3cdf575a5f7/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:56,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,800 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:56,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:56,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 25d6e02cd84117e7d88af3cdf575a5f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10041441600, jitterRate=-0.064817875623703}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:10:56,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 25d6e02cd84117e7d88af3cdf575a5f7: 2023-07-23 21:10:56,805 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7., pid=16, masterSystemTime=1690146656790 2023-07-23 21:10:56,807 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:56,807 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:56,807 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=25d6e02cd84117e7d88af3cdf575a5f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:56,807 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146656807"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146656807"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146656807"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146656807"}]},"ts":"1690146656807"} 2023-07-23 21:10:56,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-23 21:10:56,810 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; OpenRegionProcedure 25d6e02cd84117e7d88af3cdf575a5f7, server=jenkins-hbase4.apache.org,41449,1690146654570 in 173 msec 2023-07-23 21:10:56,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=14 2023-07-23 21:10:56,812 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=14, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, ASSIGN in 331 msec 2023-07-23 21:10:56,813 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:56,813 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): 
Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146656813"}]},"ts":"1690146656813"} 2023-07-23 21:10:56,814 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-23 21:10:56,816 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=14, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:56,817 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateTableProcedure table=np1:table1 in 382 msec 2023-07-23 21:10:56,830 INFO [jenkins-hbase4:37565] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:10:56,831 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=425a5daee36fc226b3152df4349c1dff, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:56,832 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146656831"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146656831"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146656831"}]},"ts":"1690146656831"} 2023-07-23 21:10:56,833 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 425a5daee36fc226b3152df4349c1dff, server=jenkins-hbase4.apache.org,41449,1690146654570}] 2023-07-23 21:10:56,988 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 
2023-07-23 21:10:56,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 425a5daee36fc226b3152df4349c1dff, NAME => 'hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:10:56,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:10:56,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,990 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,991 DEBUG [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/q 2023-07-23 21:10:56,991 DEBUG [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/q 2023-07-23 21:10:56,992 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 425a5daee36fc226b3152df4349c1dff columnFamilyName q 2023-07-23 21:10:56,992 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] regionserver.HStore(310): Store=425a5daee36fc226b3152df4349c1dff/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:56,992 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,993 DEBUG 
[StoreOpener-425a5daee36fc226b3152df4349c1dff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/u 2023-07-23 21:10:56,993 DEBUG [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/u 2023-07-23 21:10:56,994 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 425a5daee36fc226b3152df4349c1dff columnFamilyName u 2023-07-23 21:10:56,994 INFO [StoreOpener-425a5daee36fc226b3152df4349c1dff-1] regionserver.HStore(310): Store=425a5daee36fc226b3152df4349c1dff/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:10:56,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:56,997 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-23 21:10:56,998 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:57,000 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:10:57,001 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 425a5daee36fc226b3152df4349c1dff; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9947084800, jitterRate=-0.07360553741455078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 21:10:57,001 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 425a5daee36fc226b3152df4349c1dff: 2023-07-23 21:10:57,002 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff., pid=18, masterSystemTime=1690146656984 2023-07-23 21:10:57,003 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,003 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,003 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=425a5daee36fc226b3152df4349c1dff, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:57,003 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690146657003"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146657003"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146657003"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146657003"}]},"ts":"1690146657003"} 2023-07-23 21:10:57,006 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 21:10:57,006 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 425a5daee36fc226b3152df4349c1dff, server=jenkins-hbase4.apache.org,41449,1690146654570 in 172 msec 2023-07-23 21:10:57,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-23 21:10:57,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=425a5daee36fc226b3152df4349c1dff, ASSIGN in 328 msec 2023-07-23 21:10:57,008 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:10:57,008 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146657008"}]},"ts":"1690146657008"} 2023-07-23 21:10:57,009 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-23 21:10:57,012 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:10:57,013 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 789 msec 2023-07-23 21:10:57,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 21:10:57,041 INFO [Listener at localhost/44181] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 14 completed 2023-07-23 21:10:57,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:10:57,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-23 21:10:57,046 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:10:57,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-23 21:10:57,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 21:10:57,080 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=35 msec 2023-07-23 21:10:57,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 21:10:57,153 INFO [Listener at localhost/44181] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-23 21:10:57,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:10:57,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:10:57,156 INFO [Listener at localhost/44181] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-23 21:10:57,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-23 21:10:57,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-23 21:10:57,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:57,162 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146657162"}]},"ts":"1690146657162"} 2023-07-23 21:10:57,163 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-23 21:10:57,166 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-23 21:10:57,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, UNASSIGN}] 2023-07-23 21:10:57,168 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, UNASSIGN 2023-07-23 21:10:57,168 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=25d6e02cd84117e7d88af3cdf575a5f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:57,168 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146657168"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146657168"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146657168"}]},"ts":"1690146657168"} 2023-07-23 21:10:57,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 25d6e02cd84117e7d88af3cdf575a5f7, server=jenkins-hbase4.apache.org,41449,1690146654570}] 2023-07-23 21:10:57,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:57,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:57,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 25d6e02cd84117e7d88af3cdf575a5f7, disabling compactions & flushes 2023-07-23 21:10:57,323 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:57,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:57,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. after waiting 0 ms 2023-07-23 21:10:57,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:57,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:57,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7. 2023-07-23 21:10:57,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 25d6e02cd84117e7d88af3cdf575a5f7: 2023-07-23 21:10:57,329 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:57,330 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=25d6e02cd84117e7d88af3cdf575a5f7, regionState=CLOSED 2023-07-23 21:10:57,330 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146657330"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146657330"}]},"ts":"1690146657330"} 2023-07-23 21:10:57,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-23 21:10:57,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 25d6e02cd84117e7d88af3cdf575a5f7, server=jenkins-hbase4.apache.org,41449,1690146654570 in 161 msec 2023-07-23 21:10:57,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-23 21:10:57,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=25d6e02cd84117e7d88af3cdf575a5f7, UNASSIGN in 166 msec 2023-07-23 21:10:57,335 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146657335"}]},"ts":"1690146657335"} 2023-07-23 21:10:57,336 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-23 21:10:57,338 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-23 21:10:57,339 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 183 msec 2023-07-23 21:10:57,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:10:57,464 INFO [Listener at localhost/44181] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-23 21:10:57,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-23 21:10:57,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,467 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-23 21:10:57,468 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:10:57,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:10:57,472 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:57,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 21:10:57,474 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/fam1, FileablePath, hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/recovered.edits] 2023-07-23 21:10:57,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/recovered.edits/4.seqid to hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/archive/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7/recovered.edits/4.seqid 2023-07-23 21:10:57,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/.tmp/data/np1/table1/25d6e02cd84117e7d88af3cdf575a5f7 2023-07-23 21:10:57,480 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 21:10:57,483 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,484 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-23 21:10:57,486 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-23 21:10:57,487 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,487 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-23 21:10:57,487 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146657487"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:57,488 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:10:57,488 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 25d6e02cd84117e7d88af3cdf575a5f7, NAME => 'np1:table1,,1690146656433.25d6e02cd84117e7d88af3cdf575a5f7.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:10:57,488 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-23 21:10:57,489 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146657488"}]},"ts":"9223372036854775807"} 2023-07-23 21:10:57,490 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-23 21:10:57,492 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 21:10:57,493 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 27 msec 2023-07-23 21:10:57,562 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-23 21:10:57,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 21:10:57,575 INFO [Listener at localhost/44181] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-23 21:10:57,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-23 21:10:57,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,598 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,601 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,604 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:57,606 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, 
quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-23 21:10:57,606 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:10:57,607 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,609 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 21:10:57,610 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 25 msec 2023-07-23 21:10:57,635 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 21:10:57,635 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-23 21:10:57,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37565] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 21:10:57,706 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 21:10:57,706 INFO [Listener at localhost/44181] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40629047 to 127.0.0.1:50825 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] util.JVMClusterUtil(257): Found active master hash=156688301, stopped=false 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:10:57,707 DEBUG [Listener at localhost/44181] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 21:10:57,707 INFO [Listener at localhost/44181] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:57,710 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:57,710 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:57,710 DEBUG 
[Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:57,710 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:10:57,710 INFO [Listener at localhost/44181] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:10:57,710 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:57,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:57,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:57,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:57,712 DEBUG [Listener at localhost/44181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e176928 to 127.0.0.1:50825 2023-07-23 21:10:57,712 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:57,712 DEBUG [Listener at localhost/44181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41449,1690146654570' ***** 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38003,1690146654748' ***** 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37991,1690146654910' ***** 2023-07-23 21:10:57,713 INFO [Listener at localhost/44181] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:10:57,713 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:57,714 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:57,714 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:57,715 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,715 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:57,715 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): 
MemStoreFlusher.0 exiting 2023-07-23 21:10:57,725 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:10:57,727 INFO [RS:2;jenkins-hbase4:37991] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@20fbf257{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:57,727 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,727 INFO [RS:1;jenkins-hbase4:38003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@25612b5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:57,727 INFO [RS:0;jenkins-hbase4:41449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@582e1291{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:10:57,728 INFO [RS:2;jenkins-hbase4:37991] server.AbstractConnector(383): Stopped ServerConnector@2d986ee8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:57,728 INFO [RS:0;jenkins-hbase4:41449] server.AbstractConnector(383): Stopped ServerConnector@14730114{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:57,728 INFO [RS:1;jenkins-hbase4:38003] server.AbstractConnector(383): Stopped ServerConnector@601eee72{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:57,728 INFO [RS:0;jenkins-hbase4:41449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:57,728 INFO [RS:2;jenkins-hbase4:37991] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:57,728 INFO [RS:1;jenkins-hbase4:38003] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:57,729 INFO [RS:0;jenkins-hbase4:41449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51f99f2c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:2;jenkins-hbase4:37991] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76e8f8f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:0;jenkins-hbase4:41449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6aae7eec{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:1;jenkins-hbase4:38003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@199af640{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:2;jenkins-hbase4:37991] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1ee09bda{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:1;jenkins-hbase4:38003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@404c009{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:57,731 INFO [RS:0;jenkins-hbase4:41449] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:57,732 INFO [RS:0;jenkins-hbase4:41449] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:57,732 INFO [RS:2;jenkins-hbase4:37991] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:57,732 INFO [RS:0;jenkins-hbase4:41449] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:57,732 INFO [RS:2;jenkins-hbase4:37991] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:57,732 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(3305): Received CLOSE for 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:57,732 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:57,732 DEBUG [RS:0;jenkins-hbase4:41449] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4f663f68 to 127.0.0.1:50825 2023-07-23 21:10:57,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 425a5daee36fc226b3152df4349c1dff, disabling compactions & flushes 2023-07-23 21:10:57,732 INFO [RS:2;jenkins-hbase4:37991] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:57,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,733 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(3305): Received CLOSE for 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:57,733 DEBUG [RS:0;jenkins-hbase4:41449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,733 INFO [RS:0;jenkins-hbase4:41449] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:57,732 INFO [RS:1;jenkins-hbase4:38003] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:10:57,733 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:57,734 INFO [RS:1;jenkins-hbase4:38003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:10:57,734 DEBUG [RS:2;jenkins-hbase4:37991] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x346eebfe to 127.0.0.1:50825 2023-07-23 21:10:57,734 INFO [RS:1;jenkins-hbase4:38003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:10:57,733 INFO [RS:0;jenkins-hbase4:41449] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-23 21:10:57,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5b60133f8cfb7d682e9e2b591dece6b6, disabling compactions & flushes 2023-07-23 21:10:57,733 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. after waiting 0 ms 2023-07-23 21:10:57,734 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:57,734 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(3305): Received CLOSE for cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:57,734 INFO [RS:0;jenkins-hbase4:41449] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:57,734 DEBUG [RS:2;jenkins-hbase4:37991] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,735 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:10:57,735 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:57,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 
2023-07-23 21:10:57,735 DEBUG [RS:1;jenkins-hbase4:38003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6dead954 to 127.0.0.1:50825 2023-07-23 21:10:57,735 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:10:57,736 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1478): Online Regions={5b60133f8cfb7d682e9e2b591dece6b6=hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6.} 2023-07-23 21:10:57,736 DEBUG [RS:1;jenkins-hbase4:38003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,736 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:10:57,736 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1478): Online Regions={cb531debb90d21a49a8f44fe35ec3302=hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302.} 2023-07-23 21:10:57,737 DEBUG [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1504): Waiting on cb531debb90d21a49a8f44fe35ec3302 2023-07-23 21:10:57,736 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 21:10:57,737 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 425a5daee36fc226b3152df4349c1dff=hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff.} 2023-07-23 21:10:57,736 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cb531debb90d21a49a8f44fe35ec3302, disabling compactions & flushes 2023-07-23 21:10:57,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. after waiting 0 ms 2023-07-23 21:10:57,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:57,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 
2023-07-23 21:10:57,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5b60133f8cfb7d682e9e2b591dece6b6 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-23 21:10:57,737 DEBUG [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1504): Waiting on 1588230740, 425a5daee36fc226b3152df4349c1dff 2023-07-23 21:10:57,737 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:10:57,736 DEBUG [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1504): Waiting on 5b60133f8cfb7d682e9e2b591dece6b6 2023-07-23 21:10:57,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:10:57,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:10:57,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:10:57,738 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:10:57,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:57,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. after waiting 0 ms 2023-07-23 21:10:57,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:57,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing cb531debb90d21a49a8f44fe35ec3302 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-23 21:10:57,739 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-23 21:10:57,739 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/quota/425a5daee36fc226b3152df4349c1dff/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:10:57,746 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 2023-07-23 21:10:57,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 425a5daee36fc226b3152df4349c1dff: 2023-07-23 21:10:57,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690146656223.425a5daee36fc226b3152df4349c1dff. 
2023-07-23 21:10:57,759 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/.tmp/info/77d96f33c7d940d0933c601140b4e718 2023-07-23 21:10:57,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 77d96f33c7d940d0933c601140b4e718 2023-07-23 21:10:57,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/.tmp/info/77d96f33c7d940d0933c601140b4e718 as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/info/77d96f33c7d940d0933c601140b4e718 2023-07-23 21:10:57,784 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/info/bff74116ed6e43e28d835e7a84caf9a6 2023-07-23 21:10:57,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 77d96f33c7d940d0933c601140b4e718 2023-07-23 21:10:57,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/info/77d96f33c7d940d0933c601140b4e718, entries=3, sequenceid=8, filesize=5.0 K 2023-07-23 21:10:57,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/.tmp/m/0815a75f9f8f4ede8b5ce6f55a16ef4d 2023-07-23 21:10:57,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 5b60133f8cfb7d682e9e2b591dece6b6 in 51ms, sequenceid=8, compaction requested=false 2023-07-23 21:10:57,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 21:10:57,790 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bff74116ed6e43e28d835e7a84caf9a6 2023-07-23 21:10:57,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/.tmp/m/0815a75f9f8f4ede8b5ce6f55a16ef4d as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/m/0815a75f9f8f4ede8b5ce6f55a16ef4d 2023-07-23 21:10:57,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/namespace/5b60133f8cfb7d682e9e2b591dece6b6/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-23 21:10:57,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:57,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5b60133f8cfb7d682e9e2b591dece6b6: 2023-07-23 21:10:57,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146655859.5b60133f8cfb7d682e9e2b591dece6b6. 2023-07-23 21:10:57,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/m/0815a75f9f8f4ede8b5ce6f55a16ef4d, entries=1, sequenceid=7, filesize=4.9 K 2023-07-23 21:10:57,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for cb531debb90d21a49a8f44fe35ec3302 in 77ms, sequenceid=7, compaction requested=false 2023-07-23 21:10:57,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 21:10:57,825 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/rep_barrier/ee58d274fcc14dfeacb7f19bff3720b2 2023-07-23 21:10:57,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/rsgroup/cb531debb90d21a49a8f44fe35ec3302/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-23 21:10:57,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:57,830 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 2023-07-23 21:10:57,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cb531debb90d21a49a8f44fe35ec3302: 2023-07-23 21:10:57,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146655866.cb531debb90d21a49a8f44fe35ec3302. 
2023-07-23 21:10:57,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee58d274fcc14dfeacb7f19bff3720b2 2023-07-23 21:10:57,842 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/table/cdf94cd8ceb74aebb79c07112e308eb2 2023-07-23 21:10:57,848 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cdf94cd8ceb74aebb79c07112e308eb2 2023-07-23 21:10:57,848 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/info/bff74116ed6e43e28d835e7a84caf9a6 as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/info/bff74116ed6e43e28d835e7a84caf9a6 2023-07-23 21:10:57,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bff74116ed6e43e28d835e7a84caf9a6 2023-07-23 21:10:57,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/info/bff74116ed6e43e28d835e7a84caf9a6, entries=32, sequenceid=31, filesize=8.5 K 2023-07-23 21:10:57,855 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/rep_barrier/ee58d274fcc14dfeacb7f19bff3720b2 as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/rep_barrier/ee58d274fcc14dfeacb7f19bff3720b2 2023-07-23 21:10:57,862 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ee58d274fcc14dfeacb7f19bff3720b2 2023-07-23 21:10:57,862 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/rep_barrier/ee58d274fcc14dfeacb7f19bff3720b2, entries=1, sequenceid=31, filesize=4.9 K 2023-07-23 21:10:57,863 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/.tmp/table/cdf94cd8ceb74aebb79c07112e308eb2 as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/table/cdf94cd8ceb74aebb79c07112e308eb2 2023-07-23 21:10:57,869 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cdf94cd8ceb74aebb79c07112e308eb2 2023-07-23 21:10:57,869 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/table/cdf94cd8ceb74aebb79c07112e308eb2, 
entries=8, sequenceid=31, filesize=5.2 K 2023-07-23 21:10:57,870 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 132ms, sequenceid=31, compaction requested=false 2023-07-23 21:10:57,870 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 21:10:57,879 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-23 21:10:57,879 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:10:57,880 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:57,880 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:10:57,880 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 21:10:57,937 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38003,1690146654748; all regions closed. 2023-07-23 21:10:57,937 DEBUG [RS:1;jenkins-hbase4:38003] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:10:57,939 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37991,1690146654910; all regions closed. 2023-07-23 21:10:57,939 DEBUG [RS:2;jenkins-hbase4:37991] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:10:57,939 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41449,1690146654570; all regions closed. 2023-07-23 21:10:57,939 DEBUG [RS:0;jenkins-hbase4:41449] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 21:10:57,945 DEBUG [RS:1;jenkins-hbase4:38003] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs 2023-07-23 21:10:57,945 INFO [RS:1;jenkins-hbase4:38003] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38003%2C1690146654748:(num 1690146655573) 2023-07-23 21:10:57,945 DEBUG [RS:1;jenkins-hbase4:38003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,945 INFO [RS:1;jenkins-hbase4:38003] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,946 INFO [RS:1;jenkins-hbase4:38003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:57,946 INFO [RS:1;jenkins-hbase4:38003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:57,946 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:57,946 INFO [RS:1;jenkins-hbase4:38003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-23 21:10:57,946 INFO [RS:1;jenkins-hbase4:38003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:57,947 INFO [RS:1;jenkins-hbase4:38003] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38003 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38003,1690146654748 2023-07-23 21:10:57,950 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,950 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38003,1690146654748] 2023-07-23 21:10:57,951 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38003,1690146654748; numProcessing=1 2023-07-23 21:10:57,951 DEBUG [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs 2023-07-23 21:10:57,951 INFO [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41449%2C1690146654570.meta:.meta(num 1690146655796) 2023-07-23 21:10:57,951 DEBUG [RS:2;jenkins-hbase4:37991] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs 2023-07-23 21:10:57,951 INFO [RS:2;jenkins-hbase4:37991] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37991%2C1690146654910:(num 1690146655569) 2023-07-23 21:10:57,951 DEBUG [RS:2;jenkins-hbase4:37991] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,951 INFO [RS:2;jenkins-hbase4:37991] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,952 INFO [RS:2;jenkins-hbase4:37991] 
hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:57,952 INFO [RS:2;jenkins-hbase4:37991] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:10:57,952 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:57,952 INFO [RS:2;jenkins-hbase4:37991] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:10:57,952 INFO [RS:2;jenkins-hbase4:37991] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:10:57,952 INFO [RS:2;jenkins-hbase4:37991] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37991 2023-07-23 21:10:57,954 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38003,1690146654748 already deleted, retry=false 2023-07-23 21:10:57,955 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38003,1690146654748 expired; onlineServers=2 2023-07-23 21:10:57,958 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,958 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:57,958 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37991,1690146654910 2023-07-23 21:10:57,958 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37991,1690146654910] 2023-07-23 21:10:57,958 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37991,1690146654910; numProcessing=2 2023-07-23 21:10:57,960 DEBUG [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/oldWALs 2023-07-23 21:10:57,960 INFO [RS:0;jenkins-hbase4:41449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41449%2C1690146654570:(num 1690146655568) 2023-07-23 21:10:57,960 DEBUG [RS:0;jenkins-hbase4:41449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,960 INFO [RS:0;jenkins-hbase4:41449] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:10:57,960 INFO [RS:0;jenkins-hbase4:41449] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:57,960 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
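The NodeDeleted events for /hbase/rs/jenkins-hbase4.apache.org,... in the surrounding entries are how the master learns that a region server is gone: each server registers an ephemeral znode under /hbase/rs, and RegionServerTracker reacts when ZooKeeper deletes it. Below is a minimal standalone sketch of that watch mechanism using the plain ZooKeeper client, assuming the quorum string and base path shown in this log; the class itself is illustrative and not part of HBase.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZnodeWatchSketch {
    public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("ZK event " + event.getType() + " on " + event.getPath());
        // Quorum taken from the log above; the 30s session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50825", 30_000, watcher);
        // Arm a children watch on /hbase/rs; when a server's ephemeral child
        // znode disappears, NodeChildrenChanged/NodeDeleted events fire, which
        // is what the ZKWatcher lines above record.
        zk.getChildren("/hbase/rs", watcher);
        Thread.sleep(60_000); // keep the session open long enough to observe events
        zk.close();
    }
}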
2023-07-23 21:10:57,961 INFO [RS:0;jenkins-hbase4:41449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41449 2023-07-23 21:10:57,961 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37991,1690146654910 already deleted, retry=false 2023-07-23 21:10:57,961 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37991,1690146654910 expired; onlineServers=1 2023-07-23 21:10:57,962 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41449,1690146654570 2023-07-23 21:10:57,962 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:10:57,963 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41449,1690146654570] 2023-07-23 21:10:57,963 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41449,1690146654570; numProcessing=3 2023-07-23 21:10:57,964 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41449,1690146654570 already deleted, retry=false 2023-07-23 21:10:57,965 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41449,1690146654570 expired; onlineServers=0 2023-07-23 21:10:57,965 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37565,1690146654356' ***** 2023-07-23 21:10:57,965 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:10:57,965 DEBUG [M:0;jenkins-hbase4:37565] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d695a5f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:10:57,965 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:10:57,967 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:10:57,967 INFO [M:0;jenkins-hbase4:37565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7cc32677{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:10:57,967 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:10:57,968 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/master 2023-07-23 21:10:57,968 INFO [M:0;jenkins-hbase4:37565] server.AbstractConnector(383): Stopped ServerConnector@5922bff8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:57,968 INFO [M:0;jenkins-hbase4:37565] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:10:57,969 INFO [M:0;jenkins-hbase4:37565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d7341f1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:10:57,969 INFO [M:0;jenkins-hbase4:37565] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70dcb28a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir/,STOPPED} 2023-07-23 21:10:57,969 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37565,1690146654356 2023-07-23 21:10:57,969 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37565,1690146654356; all regions closed. 2023-07-23 21:10:57,969 DEBUG [M:0;jenkins-hbase4:37565] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:10:57,969 INFO [M:0;jenkins-hbase4:37565] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:10:57,970 INFO [M:0;jenkins-hbase4:37565] server.AbstractConnector(383): Stopped ServerConnector@7528cdf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:10:57,970 DEBUG [M:0;jenkins-hbase4:37565] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:10:57,970 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 21:10:57,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146655323] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146655323,5,FailOnTimeoutGroup] 2023-07-23 21:10:57,970 DEBUG [M:0;jenkins-hbase4:37565] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:10:57,970 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146655318] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146655318,5,FailOnTimeoutGroup] 2023-07-23 21:10:57,972 INFO [M:0;jenkins-hbase4:37565] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:10:57,972 INFO [M:0;jenkins-hbase4:37565] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-23 21:10:57,972 INFO [M:0;jenkins-hbase4:37565] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:10:57,972 DEBUG [M:0;jenkins-hbase4:37565] master.HMaster(1512): Stopping service threads 2023-07-23 21:10:57,972 INFO [M:0;jenkins-hbase4:37565] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:10:57,973 ERROR [M:0;jenkins-hbase4:37565] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 21:10:57,973 INFO [M:0;jenkins-hbase4:37565] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:10:57,973 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 21:10:57,973 DEBUG [M:0;jenkins-hbase4:37565] zookeeper.ZKUtil(398): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:10:57,973 WARN [M:0;jenkins-hbase4:37565] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:10:57,973 INFO [M:0;jenkins-hbase4:37565] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:10:57,974 INFO [M:0;jenkins-hbase4:37565] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:10:57,974 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:10:57,974 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:57,974 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:57,974 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:10:57,974 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:10:57,974 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.96 KB heapSize=109.12 KB 2023-07-23 21:10:57,991 INFO [M:0;jenkins-hbase4:37565] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.96 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/00e46368a295408daa7c371a6b1ea91c 2023-07-23 21:10:57,997 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/00e46368a295408daa7c371a6b1ea91c as hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/00e46368a295408daa7c371a6b1ea91c 2023-07-23 21:10:58,002 INFO [M:0;jenkins-hbase4:37565] regionserver.HStore(1080): Added hdfs://localhost:39917/user/jenkins/test-data/695c4eb2-2b0a-d02e-ae45-cd07cb82c874/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/00e46368a295408daa7c371a6b1ea91c, entries=24, sequenceid=194, filesize=12.4 K 2023-07-23 21:10:58,003 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegion(2948): Finished flush of dataSize ~92.96 KB/95194, heapSize ~109.10 KB/111720, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=194, compaction requested=false 2023-07-23 21:10:58,005 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:10:58,005 DEBUG [M:0;jenkins-hbase4:37565] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:10:58,010 INFO [M:0;jenkins-hbase4:37565] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:10:58,010 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:10:58,011 INFO [M:0;jenkins-hbase4:37565] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37565 2023-07-23 21:10:58,013 DEBUG [M:0;jenkins-hbase4:37565] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37565,1690146654356 already deleted, retry=false 2023-07-23 21:10:58,311 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,311 INFO [M:0;jenkins-hbase4:37565] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37565,1690146654356; zookeeper connection closed. 2023-07-23 21:10:58,311 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): master:37565-0x1019405cb610000, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,411 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,411 INFO [RS:0;jenkins-hbase4:41449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41449,1690146654570; zookeeper connection closed. 
2023-07-23 21:10:58,412 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:41449-0x1019405cb610001, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,413 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@633bc074] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@633bc074 2023-07-23 21:10:58,512 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,512 INFO [RS:2;jenkins-hbase4:37991] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37991,1690146654910; zookeeper connection closed. 2023-07-23 21:10:58,512 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:37991-0x1019405cb610003, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,512 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f0fc83d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f0fc83d 2023-07-23 21:10:58,612 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,612 INFO [RS:1;jenkins-hbase4:38003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38003,1690146654748; zookeeper connection closed. 2023-07-23 21:10:58,612 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): regionserver:38003-0x1019405cb610002, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:10:58,612 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@61b61911] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@61b61911 2023-07-23 21:10:58,613 INFO [Listener at localhost/44181] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-23 21:10:58,613 WARN [Listener at localhost/44181] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:58,618 INFO [Listener at localhost/44181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:58,723 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:10:58,724 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-245715629-172.31.14.131-1690146653387 (Datanode Uuid fa73450e-494d-470b-961a-686d307f3688) service to localhost/127.0.0.1:39917 2023-07-23 21:10:58,724 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data5/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
2023-07-23 21:10:58,725 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data6/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:58,726 WARN [Listener at localhost/44181] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:58,729 INFO [Listener at localhost/44181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:58,834 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:10:58,834 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-245715629-172.31.14.131-1690146653387 (Datanode Uuid a31f855b-dd01-47bd-944a-8731bc5d1293) service to localhost/127.0.0.1:39917 2023-07-23 21:10:58,839 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data3/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:58,840 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data4/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:58,842 WARN [Listener at localhost/44181] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:10:58,852 INFO [Listener at localhost/44181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:58,957 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:10:58,957 WARN [BP-245715629-172.31.14.131-1690146653387 heartbeating to localhost/127.0.0.1:39917] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-245715629-172.31.14.131-1690146653387 (Datanode Uuid 330256fe-b3ac-4fa5-a8c4-f16d2b14ad89) service to localhost/127.0.0.1:39917 2023-07-23 21:10:58,958 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data1/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:58,958 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/cluster_58a396c3-dfcd-6077-cb33-9aa44d659602/dfs/data/data2/current/BP-245715629-172.31.14.131-1690146653387] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:10:58,967 INFO [Listener at localhost/44181] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:10:59,085 INFO [Listener at localhost/44181] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.log.dir so I do NOT create it in target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7a798442-49e2-3aa4-b08b-f7544a077662/hadoop.tmp.dir so I do NOT create it in target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d, deleteOnExit=true 2023-07-23 21:10:59,131 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-23 21:10:59,132 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/test.cache.data in system properties and HBase conf 2023-07-23 21:10:59,132 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.tmp.dir in system properties and HBase conf 2023-07-23 21:10:59,132 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir in system properties and HBase conf 2023-07-23 21:10:59,132 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-23 21:10:59,132 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-23 21:10:59,132 
INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-23 21:10:59,133 DEBUG [Listener at localhost/44181] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-23 21:10:59,133 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-23 21:10:59,133 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-23 21:10:59,133 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-23 21:10:59,133 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 21:10:59,134 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-23 21:10:59,134 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-23 21:10:59,134 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 21:10:59,134 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 21:10:59,134 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-23 21:10:59,135 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/nfs.dump.dir in system properties and HBase conf 2023-07-23 21:10:59,135 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir in system properties and HBase conf 2023-07-23 21:10:59,135 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 21:10:59,135 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 21:10:59,135 INFO [Listener at localhost/44181] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 21:10:59,146 WARN [Listener at localhost/44181] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 21:10:59,147 WARN [Listener at localhost/44181] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 21:10:59,182 DEBUG [Listener at localhost/44181-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019405cb61000a, quorum=127.0.0.1:50825, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 21:10:59,183 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019405cb61000a, quorum=127.0.0.1:50825, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 21:10:59,190 WARN [Listener at localhost/44181] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:59,192 INFO [Listener at localhost/44181] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:59,197 INFO [Listener at localhost/44181] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/Jetty_localhost_40349_hdfs____sixxuu/webapp 2023-07-23 21:10:59,292 INFO [Listener at localhost/44181] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40349 2023-07-23 21:10:59,304 WARN [Listener at localhost/44181] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 21:10:59,304 WARN [Listener at localhost/44181] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 21:10:59,355 WARN [Listener at localhost/34653] common.MetricsLoggerTask(153): Metrics logging will not be 
async since the logger is not log4j 2023-07-23 21:10:59,376 WARN [Listener at localhost/34653] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:59,378 WARN [Listener at localhost/34653] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:59,379 INFO [Listener at localhost/34653] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:59,384 INFO [Listener at localhost/34653] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/Jetty_localhost_35167_datanode____vein7b/webapp 2023-07-23 21:10:59,478 INFO [Listener at localhost/34653] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35167 2023-07-23 21:10:59,486 WARN [Listener at localhost/41269] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:59,505 WARN [Listener at localhost/41269] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:59,508 WARN [Listener at localhost/41269] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:59,509 INFO [Listener at localhost/41269] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:59,514 INFO [Listener at localhost/41269] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/Jetty_localhost_45753_datanode____i1zgm3/webapp 2023-07-23 21:10:59,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4f9f2f1cdbcb029e: Processing first storage report for DS-de977aed-e6e5-4f60-b31a-4d80af4540ac from datanode 9440972b-810e-430d-add5-ba8c71400e68 2023-07-23 21:10:59,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4f9f2f1cdbcb029e: from storage DS-de977aed-e6e5-4f60-b31a-4d80af4540ac node DatanodeRegistration(127.0.0.1:39311, datanodeUuid=9440972b-810e-430d-add5-ba8c71400e68, infoPort=43747, infoSecurePort=0, ipcPort=41269, storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,594 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4f9f2f1cdbcb029e: Processing first storage report for DS-9ea9bd38-d94c-4b22-b0fc-73dcc604f0f6 from datanode 9440972b-810e-430d-add5-ba8c71400e68 2023-07-23 21:10:59,594 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4f9f2f1cdbcb029e: from storage DS-9ea9bd38-d94c-4b22-b0fc-73dcc604f0f6 node DatanodeRegistration(127.0.0.1:39311, datanodeUuid=9440972b-810e-430d-add5-ba8c71400e68, infoPort=43747, infoSecurePort=0, ipcPort=41269, storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,617 INFO 
[Listener at localhost/41269] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45753 2023-07-23 21:10:59,628 WARN [Listener at localhost/39621] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:59,642 WARN [Listener at localhost/39621] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 21:10:59,644 WARN [Listener at localhost/39621] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 21:10:59,646 INFO [Listener at localhost/39621] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 21:10:59,650 INFO [Listener at localhost/39621] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/Jetty_localhost_46693_datanode____m9lfcl/webapp 2023-07-23 21:10:59,729 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x23517729bcfe6dda: Processing first storage report for DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c from datanode 7a82de0d-7528-46f5-b355-484990e6653c 2023-07-23 21:10:59,729 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x23517729bcfe6dda: from storage DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c node DatanodeRegistration(127.0.0.1:42289, datanodeUuid=7a82de0d-7528-46f5-b355-484990e6653c, infoPort=37225, infoSecurePort=0, ipcPort=39621, storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,729 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x23517729bcfe6dda: Processing first storage report for DS-9ce55476-407e-4c09-9651-94a1c94f28ea from datanode 7a82de0d-7528-46f5-b355-484990e6653c 2023-07-23 21:10:59,729 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x23517729bcfe6dda: from storage DS-9ce55476-407e-4c09-9651-94a1c94f28ea node DatanodeRegistration(127.0.0.1:42289, datanodeUuid=7a82de0d-7528-46f5-b355-484990e6653c, infoPort=37225, infoSecurePort=0, ipcPort=39621, storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,753 INFO [Listener at localhost/39621] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46693 2023-07-23 21:10:59,761 WARN [Listener at localhost/39849] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 21:10:59,858 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36ab7fbf3842c002: Processing first storage report for DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b from datanode 70878726-e465-42c4-9ad1-f06692ccbfe7 2023-07-23 21:10:59,859 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x36ab7fbf3842c002: from storage DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b node DatanodeRegistration(127.0.0.1:46529, datanodeUuid=70878726-e465-42c4-9ad1-f06692ccbfe7, infoPort=46817, infoSecurePort=0, ipcPort=39849, 
storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,859 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36ab7fbf3842c002: Processing first storage report for DS-c41c2242-1f32-4961-89e8-e76576896ecc from datanode 70878726-e465-42c4-9ad1-f06692ccbfe7 2023-07-23 21:10:59,859 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x36ab7fbf3842c002: from storage DS-c41c2242-1f32-4961-89e8-e76576896ecc node DatanodeRegistration(127.0.0.1:46529, datanodeUuid=70878726-e465-42c4-9ad1-f06692ccbfe7, infoPort=46817, infoSecurePort=0, ipcPort=39849, storageInfo=lv=-57;cid=testClusterID;nsid=1883394847;c=1690146659155), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 21:10:59,868 DEBUG [Listener at localhost/39849] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb 2023-07-23 21:10:59,870 INFO [Listener at localhost/39849] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/zookeeper_0, clientPort=64936, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 21:10:59,871 INFO [Listener at localhost/39849] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64936 2023-07-23 21:10:59,871 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:59,872 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:59,890 INFO [Listener at localhost/39849] util.FSUtils(471): Created version file at hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 with version=8 2023-07-23 21:10:59,890 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:46635/user/jenkins/test-data/8f380e4c-e0e1-260a-1f06-44b5afb2541a/hbase-staging 2023-07-23 21:10:59,891 DEBUG [Listener at localhost/39849] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 21:10:59,891 DEBUG [Listener at localhost/39849] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 21:10:59,891 DEBUG [Listener at localhost/39849] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 
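The entries above show the harness tearing the first mini cluster down and immediately bringing a new one up with the same shape (1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server). As a hedged sketch only, not taken from this test's source, the public HBase 2.x test API that produces the "Minicluster is down" / "Starting up minicluster with option: StartMiniClusterOption{...}" lines logged here is driven roughly like this:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

// Hypothetical sketch: restart a mini cluster with the shape the log shows
// (1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server). Method names
// follow the public HBase 2.x test API; the surrounding test wiring is assumed.
public class MiniClusterRestartSketch {
  public static void restart(HBaseTestingUtility util) throws Exception {
    util.shutdownMiniCluster();            // emits the "Minicluster is down" line

    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();

    util.startMiniCluster(option);         // emits "Starting up minicluster with option: ..."
  }
}
```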
2023-07-23 21:10:59,891 DEBUG [Listener at localhost/39849] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:10:59,892 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:10:59,893 INFO [Listener at localhost/39849] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:10:59,893 INFO [Listener at localhost/39849] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44239 2023-07-23 21:10:59,894 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:59,895 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:10:59,896 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44239 connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:10:59,903 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:442390x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:10:59,904 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44239-0x1019405e1210000 connected 2023-07-23 21:10:59,921 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:10:59,921 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:10:59,921 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:10:59,922 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-23 21:10:59,925 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44239 2023-07-23 21:10:59,926 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44239 2023-07-23 21:10:59,927 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-23 21:10:59,927 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44239 2023-07-23 21:10:59,929 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:10:59,929 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:10:59,929 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:10:59,929 INFO [Listener at localhost/39849] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 21:10:59,930 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:10:59,930 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:10:59,930 INFO [Listener at localhost/39849] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
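The repeated "Set watcher on znode that does not yet exist, /hbase/master" entries above come from the master and region servers registering existence watches on znodes that have not been created yet. Illustrative only, expressed against the plain ZooKeeper client API rather than HBase's internal ZKUtil, the underlying mechanism is an exists() call that registers the watch even when the node is absent:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: exists() registers a watch whether or not the znode is
// present, so the caller is notified later by a NodeCreated/NodeDeleted event.
public class MasterZnodeWatchSketch {
  public static void watchMaster(ZooKeeper zk) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event: " + event.getType() + " on " + event.getPath());
    zk.exists("/hbase/master", watcher);   // a null Stat means the znode does not yet exist
  }
}
```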
2023-07-23 21:10:59,930 INFO [Listener at localhost/39849] http.HttpServer(1146): Jetty bound to port 45883 2023-07-23 21:10:59,930 INFO [Listener at localhost/39849] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:10:59,935 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:59,935 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f61349d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:10:59,935 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:10:59,935 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a8135f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:00,050 INFO [Listener at localhost/39849] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:00,051 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:00,052 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:00,052 INFO [Listener at localhost/39849] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:00,054 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,055 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@28f19164{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/jetty-0_0_0_0-45883-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8329484946326516409/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:00,056 INFO [Listener at localhost/39849] server.AbstractConnector(333): Started ServerConnector@14cf9bff{HTTP/1.1, (http/1.1)}{0.0.0.0:45883} 2023-07-23 21:11:00,056 INFO [Listener at localhost/39849] server.Server(415): Started @38490ms 2023-07-23 21:11:00,056 INFO [Listener at localhost/39849] master.HMaster(444): hbase.rootdir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2, hbase.cluster.distributed=false 2023-07-23 21:11:00,069 INFO [Listener at localhost/39849] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:00,070 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,070 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,070 INFO 
[Listener at localhost/39849] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:00,070 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,070 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:00,070 INFO [Listener at localhost/39849] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:00,071 INFO [Listener at localhost/39849] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39709 2023-07-23 21:11:00,071 INFO [Listener at localhost/39849] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:00,072 DEBUG [Listener at localhost/39849] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:00,073 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,074 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,075 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39709 connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:11:00,080 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:397090x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:00,082 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39709-0x1019405e1210001 connected 2023-07-23 21:11:00,082 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:00,082 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:00,083 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:00,085 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39709 2023-07-23 21:11:00,085 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39709 2023-07-23 21:11:00,086 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39709 2023-07-23 21:11:00,086 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39709 2023-07-23 21:11:00,086 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39709 2023-07-23 21:11:00,088 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:00,088 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:00,088 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:00,089 INFO [Listener at localhost/39849] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:00,089 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:00,089 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:00,089 INFO [Listener at localhost/39849] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:00,089 INFO [Listener at localhost/39849] http.HttpServer(1146): Jetty bound to port 36751 2023-07-23 21:11:00,090 INFO [Listener at localhost/39849] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:00,095 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,095 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@8c35413{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:00,095 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,095 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@63c30705{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:00,210 INFO [Listener at localhost/39849] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:00,211 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:00,211 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:00,212 INFO [Listener at localhost/39849] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 21:11:00,212 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,213 INFO 
[Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@70a8bb75{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/jetty-0_0_0_0-36751-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8049445195120686816/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:00,214 INFO [Listener at localhost/39849] server.AbstractConnector(333): Started ServerConnector@643c73c2{HTTP/1.1, (http/1.1)}{0.0.0.0:36751} 2023-07-23 21:11:00,215 INFO [Listener at localhost/39849] server.Server(415): Started @38648ms 2023-07-23 21:11:00,227 INFO [Listener at localhost/39849] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:00,227 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,227 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,227 INFO [Listener at localhost/39849] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:00,228 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,228 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:00,228 INFO [Listener at localhost/39849] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:00,229 INFO [Listener at localhost/39849] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46219 2023-07-23 21:11:00,229 INFO [Listener at localhost/39849] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:00,230 DEBUG [Listener at localhost/39849] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:00,231 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,232 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,233 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46219 connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:11:00,237 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:462190x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 
21:11:00,238 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46219-0x1019405e1210002 connected 2023-07-23 21:11:00,238 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:00,239 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:00,239 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:00,240 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46219 2023-07-23 21:11:00,240 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46219 2023-07-23 21:11:00,241 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46219 2023-07-23 21:11:00,242 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46219 2023-07-23 21:11:00,243 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46219 2023-07-23 21:11:00,245 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:00,245 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:00,245 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:00,246 INFO [Listener at localhost/39849] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:00,246 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:00,246 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:00,246 INFO [Listener at localhost/39849] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
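The regionserver lines above repeatedly report "Set watcher on znode that does not yet exist" for paths such as /hbase/master, /hbase/running and /hbase/acl. A minimal sketch of that ZooKeeper pattern follows, using the plain org.apache.zookeeper client rather than HBase's internal ZKUtil; the ensemble address 127.0.0.1:64936 is taken from the log, while the class name, session timeout and everything else are illustrative only.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeWatchSketch {
    public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event type=" + event.getType()
                + " state=" + event.getState() + " path=" + event.getPath());
        ZooKeeper zk = new ZooKeeper("127.0.0.1:64936", 90_000, watcher);
        // exists() with watch=true registers the watcher even though /hbase/master
        // has not been created yet; a NodeCreated event is delivered once it appears,
        // which is the behaviour the ZKUtil lines above describe.
        zk.exists("/hbase/master", true);
        Thread.sleep(Long.MAX_VALUE); // keep the session alive while waiting for events
    }
}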
2023-07-23 21:11:00,247 INFO [Listener at localhost/39849] http.HttpServer(1146): Jetty bound to port 36979 2023-07-23 21:11:00,247 INFO [Listener at localhost/39849] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:00,251 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,251 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1829ee92{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:00,252 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,252 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e12c56a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:00,369 INFO [Listener at localhost/39849] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:00,412 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:00,415 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:00,416 INFO [Listener at localhost/39849] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:00,422 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,423 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1e542859{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/jetty-0_0_0_0-36979-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7341996359682584949/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:00,424 INFO [Listener at localhost/39849] server.AbstractConnector(333): Started ServerConnector@846bf45{HTTP/1.1, (http/1.1)}{0.0.0.0:36979} 2023-07-23 21:11:00,425 INFO [Listener at localhost/39849] server.Server(415): Started @38858ms 2023-07-23 21:11:00,436 INFO [Listener at localhost/39849] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:00,436 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,437 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,437 INFO [Listener at localhost/39849] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:00,437 INFO 
[Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:00,437 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:00,437 INFO [Listener at localhost/39849] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:00,437 INFO [Listener at localhost/39849] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46839 2023-07-23 21:11:00,438 INFO [Listener at localhost/39849] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:00,439 DEBUG [Listener at localhost/39849] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:00,439 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,440 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,441 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46839 connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:11:00,444 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:468390x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:00,446 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:468390x0, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 21:11:00,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46839-0x1019405e1210003 connected 2023-07-23 21:11:00,446 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:00,447 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:00,447 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46839 2023-07-23 21:11:00,447 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46839 2023-07-23 21:11:00,448 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46839 2023-07-23 21:11:00,448 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46839 2023-07-23 21:11:00,448 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=46839 2023-07-23 21:11:00,449 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:00,450 INFO [Listener at localhost/39849] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 21:11:00,451 INFO [Listener at localhost/39849] http.HttpServer(1146): Jetty bound to port 35207 2023-07-23 21:11:00,451 INFO [Listener at localhost/39849] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:00,453 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,454 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@26b218ed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:00,454 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,454 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c18362f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:00,574 INFO [Listener at localhost/39849] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:00,575 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:00,575 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:00,575 INFO [Listener at localhost/39849] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:00,576 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:00,576 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4128c998{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/jetty-0_0_0_0-35207-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6181974505292589212/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:00,578 INFO [Listener at localhost/39849] server.AbstractConnector(333): Started ServerConnector@7dc44ac0{HTTP/1.1, (http/1.1)}{0.0.0.0:35207} 2023-07-23 21:11:00,578 INFO [Listener at localhost/39849] server.Server(415): Started @39012ms 2023-07-23 21:11:00,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:00,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1a6d3c87{HTTP/1.1, (http/1.1)}{0.0.0.0:42635} 2023-07-23 21:11:00,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @39017ms 2023-07-23 21:11:00,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,584 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:00,585 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,587 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:00,587 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:00,587 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:00,587 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:00,587 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:00,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44239,1690146659892 from backup master directory 2023-07-23 21:11:00,591 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:00,592 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,592 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 21:11:00,592 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:00,592 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,605 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/hbase.id with ID: 3155c502-c356-484b-8600-1021b61e1046 2023-07-23 21:11:00,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:00,619 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4ecbda38 to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:00,633 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63017ae1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:00,633 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 21:11:00,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:00,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store-tmp 2023-07-23 21:11:00,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:11:00,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:00,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
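The bootstrap above creates the local 'master:store' region with a single 'proc' family (BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', BLOCKSIZE => '65536'). A rough sketch of how the same family settings look when expressed through the public HBase 2.x descriptor builders; this is illustrative only and not the internal MasterRegion code path, and the class and method names are made up for the example.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class StoreDescriptorSketch {
    public static TableDescriptor masterStoreLikeDescriptor() {
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setInMemory(false)                // IN_MEMORY => 'false'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .build();
        // "master:store" is namespace "master", qualifier "store", as printed above.
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
    }
}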
2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:00,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/WALs/jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44239%2C1690146659892, suffix=, logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/WALs/jenkins-hbase4.apache.org,44239,1690146659892, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/oldWALs, maxLogs=10 2023-07-23 21:11:00,670 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:00,670 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:00,670 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:00,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/WALs/jenkins-hbase4.apache.org,44239,1690146659892/jenkins-hbase4.apache.org%2C44239%2C1690146659892.1690146660649 2023-07-23 21:11:00,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK], DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK], DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK]] 2023-07-23 21:11:00,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:00,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:00,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,673 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,675 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,676 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 21:11:00,676 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 21:11:00,677 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:00,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,680 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 21:11:00,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:00,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11598678880, jitterRate=0.08021114766597748}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:00,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:00,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 21:11:00,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 21:11:00,685 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 21:11:00,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 21:11:00,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 21:11:00,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 21:11:00,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 21:11:00,686 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 21:11:00,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-23 21:11:00,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 21:11:00,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 21:11:00,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 21:11:00,692 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 21:11:00,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 21:11:00,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 21:11:00,695 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:00,695 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:00,695 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-23 21:11:00,695 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:00,695 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:00,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44239,1690146659892, sessionid=0x1019405e1210000, setting cluster-up flag (Was=false) 2023-07-23 21:11:00,700 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 21:11:00,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,710 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 21:11:00,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:00,715 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.hbase-snapshot/.tmp 2023-07-23 21:11:00,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 21:11:00,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 21:11:00,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 21:11:00,717 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:00,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
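The coprocessor lines above show org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint being loaded on the master and the RSGroupAdminService being registered. A minimal configuration sketch for enabling the rsgroup feature on an HBase 2.x master follows; the keys are the standard ones, but the test harness may wire this up differently, and the class name is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfSketch {
    public static Configuration rsGroupEnabledConf() {
        Configuration conf = HBaseConfiguration.create();
        // Load the RSGroup admin endpoint on the master, matching the coprocessor line above.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // The rsgroup feature expects the group-aware balancer implementation.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
    }
}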
2023-07-23 21:11:00,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 21:11:00,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:00,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:11:00,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 21:11:00,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:00,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690146690732 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 21:11:00,732 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 21:11:00,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 21:11:00,733 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:11:00,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 21:11:00,733 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 21:11:00,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 21:11:00,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146660734,5,FailOnTimeoutGroup] 2023-07-23 21:11:00,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146660734,5,FailOnTimeoutGroup] 2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,735 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:00,744 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:00,745 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:00,745 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 2023-07-23 21:11:00,757 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:00,758 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:11:00,760 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/info 2023-07-23 21:11:00,760 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:11:00,760 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:00,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:11:00,762 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:00,762 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:11:00,763 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:00,763 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:11:00,764 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/table 2023-07-23 
21:11:00,764 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:11:00,765 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:00,765 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740 2023-07-23 21:11:00,766 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740 2023-07-23 21:11:00,768 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 21:11:00,769 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:11:00,770 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:00,771 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10791518240, jitterRate=0.005038455128669739}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:11:00,771 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:11:00,771 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:11:00,771 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:11:00,771 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:11:00,771 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:11:00,771 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:11:00,774 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:00,774 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:11:00,775 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 21:11:00,775 INFO [PEWorker-1] 
procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 21:11:00,775 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 21:11:00,775 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 21:11:00,777 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 21:11:00,780 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(951): ClusterId : 3155c502-c356-484b-8600-1021b61e1046 2023-07-23 21:11:00,780 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(951): ClusterId : 3155c502-c356-484b-8600-1021b61e1046 2023-07-23 21:11:00,780 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:00,780 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(951): ClusterId : 3155c502-c356-484b-8600-1021b61e1046 2023-07-23 21:11:00,780 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:00,780 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:00,783 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:00,783 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:00,783 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:00,783 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:00,783 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:00,784 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:00,785 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:00,787 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ReadOnlyZKClient(139): Connect 0x4fd90d42 to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:00,787 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:00,787 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:00,791 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ReadOnlyZKClient(139): Connect 0x0ab080cb to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:00,791 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ReadOnlyZKClient(139): 
Connect 0x785ea930 to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:00,797 DEBUG [RS:1;jenkins-hbase4:46219] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@411eeac9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:00,799 DEBUG [RS:1;jenkins-hbase4:46219] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45216632, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:00,801 DEBUG [RS:2;jenkins-hbase4:46839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c0d0e80, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:00,801 DEBUG [RS:2;jenkins-hbase4:46839] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b70cdfb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:00,807 DEBUG [RS:0;jenkins-hbase4:39709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d5c285f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:00,807 DEBUG [RS:0;jenkins-hbase4:39709] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@792c5976, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:00,810 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46219 2023-07-23 21:11:00,810 INFO [RS:1;jenkins-hbase4:46219] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:00,810 INFO [RS:1;jenkins-hbase4:46219] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:00,810 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 21:11:00,810 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44239,1690146659892 with isa=jenkins-hbase4.apache.org/172.31.14.131:46219, startcode=1690146660226 2023-07-23 21:11:00,811 DEBUG [RS:1;jenkins-hbase4:46219] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:00,811 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46839 2023-07-23 21:11:00,811 INFO [RS:2;jenkins-hbase4:46839] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:00,811 INFO [RS:2;jenkins-hbase4:46839] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:00,811 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:00,811 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44239,1690146659892 with isa=jenkins-hbase4.apache.org/172.31.14.131:46839, startcode=1690146660436 2023-07-23 21:11:00,811 DEBUG [RS:2;jenkins-hbase4:46839] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:00,812 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54715, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:00,814 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,814 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 21:11:00,814 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50717, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:00,814 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 21:11:00,815 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 2023-07-23 21:11:00,815 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34653 2023-07-23 21:11:00,815 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45883 2023-07-23 21:11:00,815 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,815 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:11:00,815 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 21:11:00,815 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 2023-07-23 21:11:00,815 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34653 2023-07-23 21:11:00,815 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45883 2023-07-23 21:11:00,816 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:00,818 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39709 2023-07-23 21:11:00,818 INFO [RS:0;jenkins-hbase4:39709] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:00,818 INFO [RS:0;jenkins-hbase4:39709] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:00,818 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:00,821 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,821 WARN [RS:1;jenkins-hbase4:46219] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:11:00,821 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46219,1690146660226] 2023-07-23 21:11:00,821 INFO [RS:1;jenkins-hbase4:46219] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:00,822 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46839,1690146660436] 2023-07-23 21:11:00,822 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,821 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44239,1690146659892 with isa=jenkins-hbase4.apache.org/172.31.14.131:39709, startcode=1690146660069 2023-07-23 21:11:00,822 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,822 DEBUG [RS:0;jenkins-hbase4:39709] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:00,822 WARN [RS:2;jenkins-hbase4:46839] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 21:11:00,822 INFO [RS:2;jenkins-hbase4:46839] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:00,822 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,824 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42257, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:00,825 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,826 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:11:00,826 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 21:11:00,826 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 2023-07-23 21:11:00,826 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34653 2023-07-23 21:11:00,826 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45883 2023-07-23 21:11:00,827 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:00,828 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,828 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,828 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39709,1690146660069] 2023-07-23 21:11:00,828 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,828 WARN [RS:0;jenkins-hbase4:39709] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:11:00,828 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,828 INFO [RS:0;jenkins-hbase4:39709] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:00,828 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,828 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,828 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,829 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,832 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:00,832 INFO [RS:1;jenkins-hbase4:46219] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:00,833 DEBUG [RS:2;jenkins-hbase4:46839] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:00,834 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,834 INFO [RS:2;jenkins-hbase4:46839] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:00,834 INFO [RS:1;jenkins-hbase4:46219] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:00,834 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,834 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,835 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:00,835 INFO [RS:0;jenkins-hbase4:39709] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:00,838 INFO [RS:1;jenkins-hbase4:46219] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:00,839 INFO [RS:0;jenkins-hbase4:39709] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 
M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:00,839 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,839 INFO [RS:2;jenkins-hbase4:46839] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:00,839 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:00,840 INFO [RS:0;jenkins-hbase4:39709] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:00,840 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,840 INFO [RS:2;jenkins-hbase4:46839] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:00,840 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:00,840 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,841 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:00,842 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,842 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,842 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:00,842 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:00,842 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:00,843 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:1;jenkins-hbase4:46219] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:0;jenkins-hbase4:39709] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,843 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service 
name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,844 DEBUG [RS:2;jenkins-hbase4:46839] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:00,850 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,851 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,851 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,851 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,855 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,855 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,857 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,858 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,858 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,870 INFO [RS:2;jenkins-hbase4:46839] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:00,870 INFO [RS:1;jenkins-hbase4:46219] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:00,870 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46839,1690146660436-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,870 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46219,1690146660226-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:00,877 INFO [RS:0;jenkins-hbase4:39709] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:00,877 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39709,1690146660069-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:00,881 INFO [RS:1;jenkins-hbase4:46219] regionserver.Replication(203): jenkins-hbase4.apache.org,46219,1690146660226 started 2023-07-23 21:11:00,881 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46219,1690146660226, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46219, sessionid=0x1019405e1210002 2023-07-23 21:11:00,881 INFO [RS:2;jenkins-hbase4:46839] regionserver.Replication(203): jenkins-hbase4.apache.org,46839,1690146660436 started 2023-07-23 21:11:00,881 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:00,881 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46839,1690146660436, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46839, sessionid=0x1019405e1210003 2023-07-23 21:11:00,881 DEBUG [RS:1;jenkins-hbase4:46219] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,881 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:00,881 DEBUG [RS:2;jenkins-hbase4:46839] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,881 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46839,1690146660436' 2023-07-23 21:11:00,881 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:00,881 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46219,1690146660226' 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:00,882 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:00,882 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46219,1690146660226' 2023-07-23 21:11:00,882 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:00,882 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:00,882 
DEBUG [RS:2;jenkins-hbase4:46839] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:00,883 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46839,1690146660436' 2023-07-23 21:11:00,883 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:00,883 DEBUG [RS:1;jenkins-hbase4:46219] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:00,883 DEBUG [RS:2;jenkins-hbase4:46839] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:00,883 DEBUG [RS:1;jenkins-hbase4:46219] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:00,883 INFO [RS:1;jenkins-hbase4:46219] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:00,883 INFO [RS:1;jenkins-hbase4:46219] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:11:00,883 DEBUG [RS:2;jenkins-hbase4:46839] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:00,884 INFO [RS:2;jenkins-hbase4:46839] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:00,884 INFO [RS:2;jenkins-hbase4:46839] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:11:00,892 INFO [RS:0;jenkins-hbase4:39709] regionserver.Replication(203): jenkins-hbase4.apache.org,39709,1690146660069 started 2023-07-23 21:11:00,892 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39709,1690146660069, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39709, sessionid=0x1019405e1210001 2023-07-23 21:11:00,894 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:00,894 DEBUG [RS:0;jenkins-hbase4:39709] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,894 DEBUG [RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39709,1690146660069' 2023-07-23 21:11:00,894 DEBUG [RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39709,1690146660069' 2023-07-23 21:11:00,895 DEBUG 
[RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:00,895 DEBUG [RS:0;jenkins-hbase4:39709] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:00,896 DEBUG [RS:0;jenkins-hbase4:39709] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:00,896 INFO [RS:0;jenkins-hbase4:39709] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:00,896 INFO [RS:0;jenkins-hbase4:39709] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 21:11:00,927 DEBUG [jenkins-hbase4:44239] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 21:11:00,927 DEBUG [jenkins-hbase4:44239] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:00,927 DEBUG [jenkins-hbase4:44239] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:00,927 DEBUG [jenkins-hbase4:44239] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:00,928 DEBUG [jenkins-hbase4:44239] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:00,928 DEBUG [jenkins-hbase4:44239] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:00,929 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39709,1690146660069, state=OPENING 2023-07-23 21:11:00,930 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 21:11:00,931 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:00,932 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39709,1690146660069}] 2023-07-23 21:11:00,932 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:00,985 INFO [RS:1;jenkins-hbase4:46219] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46219%2C1690146660226, suffix=, logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46219,1690146660226, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs, maxLogs=32 2023-07-23 21:11:00,985 INFO [RS:2;jenkins-hbase4:46839] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46839%2C1690146660436, suffix=, logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46839,1690146660436, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs, maxLogs=32 2023-07-23 21:11:00,999 INFO [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C39709%2C1690146660069, suffix=, logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,39709,1690146660069, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs, maxLogs=32 2023-07-23 21:11:01,001 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:01,001 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:01,001 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:01,003 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:01,004 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:01,004 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:01,014 INFO [RS:2;jenkins-hbase4:46839] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46839,1690146660436/jenkins-hbase4.apache.org%2C46839%2C1690146660436.1690146660986 2023-07-23 21:11:01,015 INFO [RS:1;jenkins-hbase4:46219] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,46219,1690146660226/jenkins-hbase4.apache.org%2C46219%2C1690146660226.1690146660986 2023-07-23 21:11:01,017 DEBUG [RS:2;jenkins-hbase4:46839] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK], DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK], DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK]] 2023-07-23 21:11:01,018 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:01,018 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:01,018 DEBUG [RS:1;jenkins-hbase4:46219] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK], DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK], DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK]] 2023-07-23 21:11:01,018 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:01,022 INFO [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,39709,1690146660069/jenkins-hbase4.apache.org%2C39709%2C1690146660069.1690146661000 2023-07-23 21:11:01,023 WARN [ReadOnlyZKClient-127.0.0.1:64936@0x4ecbda38] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 21:11:01,023 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:01,026 DEBUG [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK], DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK], DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK]] 2023-07-23 21:11:01,027 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:01,027 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39709] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:49936 deadline: 1690146721027, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:01,086 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:01,088 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:01,089 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49952, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:01,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 21:11:01,093 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:01,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39709%2C1690146660069.meta, suffix=.meta, 
logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,39709,1690146660069, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs, maxLogs=32 2023-07-23 21:11:01,110 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:01,110 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:01,111 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:01,113 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,39709,1690146660069/jenkins-hbase4.apache.org%2C39709%2C1690146660069.meta.1690146661095.meta 2023-07-23 21:11:01,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK], DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK], DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK]] 2023-07-23 21:11:01,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:01,113 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:01,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 21:11:01,114 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 21:11:01,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 21:11:01,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:01,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 21:11:01,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 21:11:01,119 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 21:11:01,120 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/info 2023-07-23 21:11:01,120 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/info 2023-07-23 21:11:01,120 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 21:11:01,120 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:01,121 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 21:11:01,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:01,121 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/rep_barrier 2023-07-23 21:11:01,122 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 21:11:01,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:01,122 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 21:11:01,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/table 2023-07-23 21:11:01,123 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/table 2023-07-23 21:11:01,123 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 21:11:01,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:01,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740 2023-07-23 21:11:01,125 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740 2023-07-23 21:11:01,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 21:11:01,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 21:11:01,129 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10598135200, jitterRate=-0.012971743941307068}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 21:11:01,129 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 21:11:01,130 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690146661086 2023-07-23 21:11:01,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 21:11:01,134 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 21:11:01,135 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39709,1690146660069, state=OPEN 2023-07-23 21:11:01,137 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 21:11:01,137 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 21:11:01,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 21:11:01,139 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39709,1690146660069 in 205 msec 2023-07-23 21:11:01,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 21:11:01,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 364 msec 2023-07-23 21:11:01,142 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 424 msec 2023-07-23 21:11:01,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690146661142, completionTime=-1 2023-07-23 21:11:01,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 21:11:01,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 21:11:01,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 21:11:01,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690146721147 2023-07-23 21:11:01,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690146781147 2023-07-23 21:11:01,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-23 21:11:01,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1690146659892-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1690146659892-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1690146659892-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44239, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 21:11:01,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:01,154 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 21:11:01,155 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 21:11:01,155 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:11:01,156 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:11:01,157 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,157 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273 empty. 2023-07-23 21:11:01,158 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,158 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 21:11:01,169 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:01,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c2ddf9d01663accd46fe1970916cf273, NAME => 'hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp 2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c2ddf9d01663accd46fe1970916cf273, disabling compactions & flushes 2023-07-23 21:11:01,178 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 
2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. after waiting 0 ms 2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:01,178 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:01,178 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c2ddf9d01663accd46fe1970916cf273: 2023-07-23 21:11:01,180 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:11:01,181 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146661181"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146661181"}]},"ts":"1690146661181"} 2023-07-23 21:11:01,183 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:11:01,184 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:11:01,184 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146661184"}]},"ts":"1690146661184"} 2023-07-23 21:11:01,185 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 21:11:01,189 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:01,190 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:01,190 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:01,190 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:01,190 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:01,190 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2ddf9d01663accd46fe1970916cf273, ASSIGN}] 2023-07-23 21:11:01,192 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c2ddf9d01663accd46fe1970916cf273, ASSIGN 2023-07-23 21:11:01,192 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c2ddf9d01663accd46fe1970916cf273, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46219,1690146660226; forceNewPlan=false, retain=false 2023-07-23 21:11:01,330 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:01,332 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 21:11:01,333 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:11:01,334 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:11:01,335 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,336 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2 empty. 2023-07-23 21:11:01,336 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,336 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 21:11:01,343 INFO [jenkins-hbase4:44239] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:11:01,343 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2ddf9d01663accd46fe1970916cf273, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:01,344 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146661343"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146661343"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146661343"}]},"ts":"1690146661343"} 2023-07-23 21:11:01,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure c2ddf9d01663accd46fe1970916cf273, server=jenkins-hbase4.apache.org,46219,1690146660226}] 2023-07-23 21:11:01,352 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:01,354 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4eed06501db5f2e2c9115c69aeb57e2, NAME => 'hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp 2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e4eed06501db5f2e2c9115c69aeb57e2, disabling compactions & flushes 2023-07-23 21:11:01,364 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. after waiting 0 ms 2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,364 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 
2023-07-23 21:11:01,364 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e4eed06501db5f2e2c9115c69aeb57e2: 2023-07-23 21:11:01,366 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:11:01,367 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146661367"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146661367"}]},"ts":"1690146661367"} 2023-07-23 21:11:01,368 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 21:11:01,369 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:11:01,369 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146661369"}]},"ts":"1690146661369"} 2023-07-23 21:11:01,370 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 21:11:01,373 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:01,373 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:01,373 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:01,373 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:01,373 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:01,373 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e4eed06501db5f2e2c9115c69aeb57e2, ASSIGN}] 2023-07-23 21:11:01,374 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e4eed06501db5f2e2c9115c69aeb57e2, ASSIGN 2023-07-23 21:11:01,374 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e4eed06501db5f2e2c9115c69aeb57e2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39709,1690146660069; forceNewPlan=false, retain=false 2023-07-23 21:11:01,497 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:01,498 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 21:11:01,499 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 21:11:01,506 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:01,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2ddf9d01663accd46fe1970916cf273, NAME => 'hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:01,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:01,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,508 INFO [StoreOpener-c2ddf9d01663accd46fe1970916cf273-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,510 DEBUG [StoreOpener-c2ddf9d01663accd46fe1970916cf273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/info 2023-07-23 21:11:01,510 DEBUG [StoreOpener-c2ddf9d01663accd46fe1970916cf273-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/info 2023-07-23 21:11:01,515 INFO [StoreOpener-c2ddf9d01663accd46fe1970916cf273-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2ddf9d01663accd46fe1970916cf273 columnFamilyName info 2023-07-23 21:11:01,516 INFO [StoreOpener-c2ddf9d01663accd46fe1970916cf273-1] regionserver.HStore(310): Store=c2ddf9d01663accd46fe1970916cf273/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:01,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:01,525 INFO [jenkins-hbase4:44239] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 21:11:01,526 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e4eed06501db5f2e2c9115c69aeb57e2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:01,526 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146661526"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146661526"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146661526"}]},"ts":"1690146661526"} 2023-07-23 21:11:01,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:01,527 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure e4eed06501db5f2e2c9115c69aeb57e2, server=jenkins-hbase4.apache.org,39709,1690146660069}] 2023-07-23 21:11:01,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2ddf9d01663accd46fe1970916cf273; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10377753920, jitterRate=-0.033496350049972534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:01,528 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2ddf9d01663accd46fe1970916cf273: 2023-07-23 21:11:01,529 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273., pid=7, masterSystemTime=1690146661497 2023-07-23 21:11:01,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:01,533 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 
2023-07-23 21:11:01,534 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c2ddf9d01663accd46fe1970916cf273, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:01,534 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690146661534"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146661534"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146661534"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146661534"}]},"ts":"1690146661534"} 2023-07-23 21:11:01,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 21:11:01,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure c2ddf9d01663accd46fe1970916cf273, server=jenkins-hbase4.apache.org,46219,1690146660226 in 190 msec 2023-07-23 21:11:01,538 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-23 21:11:01,538 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c2ddf9d01663accd46fe1970916cf273, ASSIGN in 347 msec 2023-07-23 21:11:01,539 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:11:01,539 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146661539"}]},"ts":"1690146661539"} 2023-07-23 21:11:01,540 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 21:11:01,543 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:11:01,544 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 390 msec 2023-07-23 21:11:01,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 21:11:01,557 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:11:01,557 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:01,560 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:01,562 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-23 21:11:01,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 21:11:01,572 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:11:01,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-23 21:11:01,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:11:01,589 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-23 21:11:01,589 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 21:11:01,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4eed06501db5f2e2c9115c69aeb57e2, NAME => 'hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. service=MultiRowMutationService 2023-07-23 21:11:01,683 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:01,683 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,685 INFO [StoreOpener-e4eed06501db5f2e2c9115c69aeb57e2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,686 DEBUG [StoreOpener-e4eed06501db5f2e2c9115c69aeb57e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/m 2023-07-23 21:11:01,686 DEBUG [StoreOpener-e4eed06501db5f2e2c9115c69aeb57e2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/m 2023-07-23 21:11:01,686 INFO [StoreOpener-e4eed06501db5f2e2c9115c69aeb57e2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4eed06501db5f2e2c9115c69aeb57e2 columnFamilyName m 2023-07-23 21:11:01,687 INFO [StoreOpener-e4eed06501db5f2e2c9115c69aeb57e2-1] regionserver.HStore(310): Store=e4eed06501db5f2e2c9115c69aeb57e2/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:01,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,688 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,691 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:01,692 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:01,693 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4eed06501db5f2e2c9115c69aeb57e2; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@e690955, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:01,693 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4eed06501db5f2e2c9115c69aeb57e2: 2023-07-23 21:11:01,694 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2., pid=9, masterSystemTime=1690146661679 2023-07-23 21:11:01,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,695 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:01,695 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=e4eed06501db5f2e2c9115c69aeb57e2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:01,696 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690146661695"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146661695"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146661695"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146661695"}]},"ts":"1690146661695"} 2023-07-23 21:11:01,698 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-23 21:11:01,698 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure e4eed06501db5f2e2c9115c69aeb57e2, server=jenkins-hbase4.apache.org,39709,1690146660069 in 170 msec 2023-07-23 21:11:01,700 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-23 21:11:01,700 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e4eed06501db5f2e2c9115c69aeb57e2, ASSIGN in 325 msec 2023-07-23 21:11:01,705 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:11:01,710 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 123 msec 2023-07-23 21:11:01,710 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:11:01,711 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146661711"}]},"ts":"1690146661711"} 2023-07-23 21:11:01,712 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 21:11:01,714 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:11:01,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 384 msec 2023-07-23 21:11:01,721 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 21:11:01,724 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 21:11:01,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.132sec 2023-07-23 21:11:01,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 21:11:01,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 21:11:01,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 21:11:01,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1690146659892-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 21:11:01,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44239,1690146659892-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 21:11:01,726 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 21:11:01,735 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 21:11:01,735 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 21:11:01,742 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:01,742 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:01,744 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:11:01,747 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 21:11:01,781 DEBUG [Listener at localhost/39849] zookeeper.ReadOnlyZKClient(139): Connect 0x69c9dac6 to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:01,788 DEBUG [Listener at localhost/39849] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47974172, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:01,791 DEBUG [hconnection-0x40742411-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:01,793 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49966, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:01,795 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:01,795 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:01,797 DEBUG [Listener at localhost/39849] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 21:11:01,799 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 21:11:01,803 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 21:11:01,803 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:01,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 21:11:01,804 DEBUG [Listener at localhost/39849] zookeeper.ReadOnlyZKClient(139): Connect 0x00656282 to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:01,813 DEBUG [Listener at localhost/39849] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61508a33, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:01,813 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:11:01,817 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:01,823 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019405e121000a connected 2023-07-23 21:11:01,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:01,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:01,831 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 21:11:01,842 INFO [Listener at localhost/39849] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 21:11:01,842 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,842 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,843 INFO [Listener at localhost/39849] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 21:11:01,843 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 21:11:01,843 INFO [Listener at localhost/39849] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 21:11:01,843 INFO [Listener at localhost/39849] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 21:11:01,846 INFO [Listener at localhost/39849] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35155 2023-07-23 21:11:01,846 INFO [Listener at localhost/39849] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 21:11:01,849 DEBUG [Listener at localhost/39849] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 21:11:01,849 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,850 INFO [Listener at localhost/39849] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 21:11:01,851 INFO [Listener at localhost/39849] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35155 connecting to ZooKeeper ensemble=127.0.0.1:64936 2023-07-23 21:11:01,855 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:351550x0, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 21:11:01,857 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35155-0x1019405e121000b connected 2023-07-23 21:11:01,857 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 21:11:01,858 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 21:11:01,859 DEBUG [Listener at localhost/39849] zookeeper.ZKUtil(164): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 21:11:01,859 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35155 2023-07-23 21:11:01,861 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35155 2023-07-23 21:11:01,862 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35155 2023-07-23 21:11:01,864 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35155 2023-07-23 21:11:01,866 DEBUG [Listener at localhost/39849] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35155 2023-07-23 21:11:01,868 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 21:11:01,868 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 21:11:01,868 INFO [Listener at localhost/39849] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. 
Disabling /prof endpoint. 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] http.HttpServer(1146): Jetty bound to port 35467 2023-07-23 21:11:01,869 INFO [Listener at localhost/39849] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 21:11:01,871 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,871 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e4d2453{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,AVAILABLE} 2023-07-23 21:11:01,871 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,871 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b6fd3cd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 21:11:01,996 INFO [Listener at localhost/39849] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 21:11:01,997 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 21:11:01,997 INFO [Listener at localhost/39849] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 21:11:01,997 INFO [Listener at localhost/39849] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 21:11:01,998 INFO [Listener at localhost/39849] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 21:11:01,999 INFO [Listener at localhost/39849] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4330e5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/java.io.tmpdir/jetty-0_0_0_0-35467-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1650945583392545630/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:02,000 INFO [Listener at localhost/39849] server.AbstractConnector(333): Started ServerConnector@7cd3c1d7{HTTP/1.1, (http/1.1)}{0.0.0.0:35467} 2023-07-23 21:11:02,001 INFO [Listener at localhost/39849] server.Server(415): Started @40435ms 2023-07-23 21:11:02,003 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(951): ClusterId : 3155c502-c356-484b-8600-1021b61e1046 2023-07-23 21:11:02,003 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 21:11:02,005 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 21:11:02,005 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 21:11:02,007 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 21:11:02,010 DEBUG [RS:3;jenkins-hbase4:35155] 
zookeeper.ReadOnlyZKClient(139): Connect 0x087e9aea to 127.0.0.1:64936 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 21:11:02,016 DEBUG [RS:3;jenkins-hbase4:35155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b96595, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 21:11:02,016 DEBUG [RS:3;jenkins-hbase4:35155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@286f19a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:02,024 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35155 2023-07-23 21:11:02,024 INFO [RS:3;jenkins-hbase4:35155] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 21:11:02,024 INFO [RS:3;jenkins-hbase4:35155] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 21:11:02,024 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 21:11:02,025 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44239,1690146659892 with isa=jenkins-hbase4.apache.org/172.31.14.131:35155, startcode=1690146661842 2023-07-23 21:11:02,025 DEBUG [RS:3;jenkins-hbase4:35155] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 21:11:02,027 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58341, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 21:11:02,027 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,027 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 21:11:02,028 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2 2023-07-23 21:11:02,028 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34653 2023-07-23 21:11:02,028 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=45883 2023-07-23 21:11:02,032 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,032 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,032 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,032 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:02,032 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:02,032 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,032 WARN [RS:3;jenkins-hbase4:35155] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 21:11:02,032 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35155,1690146661842] 2023-07-23 21:11:02,032 INFO [RS:3;jenkins-hbase4:35155] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 21:11:02,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,033 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 21:11:02,033 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:02,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:02,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:02,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:02,035 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 21:11:02,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:02,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:02,035 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,039 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,039 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:02,039 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:02,040 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ZKUtil(162): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,040 DEBUG [RS:3;jenkins-hbase4:35155] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 21:11:02,040 INFO [RS:3;jenkins-hbase4:35155] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 21:11:02,041 INFO [RS:3;jenkins-hbase4:35155] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 21:11:02,042 INFO [RS:3;jenkins-hbase4:35155] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 21:11:02,042 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,042 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 21:11:02,043 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,043 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,044 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,044 DEBUG [RS:3;jenkins-hbase4:35155] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 21:11:02,047 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,047 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,047 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 21:11:02,058 INFO [RS:3;jenkins-hbase4:35155] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 21:11:02,058 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35155,1690146661842-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 21:11:02,068 INFO [RS:3;jenkins-hbase4:35155] regionserver.Replication(203): jenkins-hbase4.apache.org,35155,1690146661842 started 2023-07-23 21:11:02,068 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35155,1690146661842, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35155, sessionid=0x1019405e121000b 2023-07-23 21:11:02,069 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 21:11:02,069 DEBUG [RS:3;jenkins-hbase4:35155] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,069 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35155,1690146661842' 2023-07-23 21:11:02,069 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 21:11:02,069 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 21:11:02,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35155,1690146661842' 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 21:11:02,070 DEBUG [RS:3;jenkins-hbase4:35155] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 21:11:02,070 INFO [RS:3;jenkins-hbase4:35155] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 21:11:02,070 INFO [RS:3;jenkins-hbase4:35155] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
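[Editor's note] The entries above record a fourth region server (RS:3, jenkins-hbase4.apache.org,35155) being started and registering with the mini cluster mid-test. A minimal sketch of how a test can bring up such an extra region server with the HBase test utilities follows; the surrounding setup (cluster size, configuration) is assumed and not taken from this log.

```java
// Hedged sketch, not the test's actual code: start a mini cluster and then add
// one more region server, mirroring the RS:3 startup sequence logged above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;

public class StartExtraRegionServerSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testUtil = new HBaseTestingUtility();
    testUtil.startMiniCluster(3); // matches numRegionServers=3 from the cluster setup

    // Add a fourth region server, analogous to RS:3 in the log.
    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline(); // blocks until reportForDuty/registration completes
    System.out.println("started " + rst.getRegionServer().getServerName());

    testUtil.shutdownMiniCluster();
  }
}
```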
2023-07-23 21:11:02,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:02,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:02,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:02,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:02,078 DEBUG [hconnection-0x5da743fd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 21:11:02,081 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49972, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 21:11:02,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:02,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:02,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:02,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:02,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50108 deadline: 1690147862088, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
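[Editor's note] The entry above shows the master rejecting an attempt to move its own address (jenkins-hbase4.apache.org:44239) into the "master" rsgroup with a ConstraintException, and the WARN that follows shows the same failure from the client side of the test's teardown. A minimal sketch of that client call path is below; class and method names (RSGroupAdminClient.addRSGroup/moveServers, Address) come from the stack trace, while the connection setup around them is assumed.

```java
// Hedged sketch of the call that produces the ConstraintException logged here:
// asking the master to move its own address into the "master" rsgroup fails
// because that address is not an online region server.
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master"); // "add rsgroup master" in the log

      // Expected to throw ConstraintException ("Server ... is either offline or
      // it does not exist."): the master's address is not a registered region server.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44239)),
          "master");
    }
  }
}
```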
2023-07-23 21:11:02,089 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:02,090 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:02,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:02,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:02,092 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:02,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:02,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:02,152 INFO [Listener at localhost/39849] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=561 (was 502) Potentially hanging thread: M:0;jenkins-hbase4:44239 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data1/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:54746 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1399523964-2513 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:39917 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp103469184-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2249-acceptor-0@4626f00f-ServerConnector@1a6d3c87{HTTP/1.1, (http/1.1)}{0.0.0.0:42635} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728990259-2174 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1399523964-2519 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp403767851-2144-acceptor-0@1c32a4e6-ServerConnector@14cf9bff{HTTP/1.1, (http/1.1)}{0.0.0.0:45883} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4fd90d42-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39917 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1146293428-2205-acceptor-0@11e58fb8-ServerConnector@846bf45{HTTP/1.1, (http/1.1)}{0.0.0.0:36979} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:48398 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-5b92302e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2245 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@27dd0981 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1510508138_17 at /127.0.0.1:45922 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:35155Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-7f582bf1-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp403767851-2149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 39849 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase4:35155 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 434711884@qtp-350742907-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45753 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-10-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x00656282-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1146293428-2206 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1399523964-2515 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x785ea930-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-550-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2-prefix:jenkins-hbase4.apache.org,46839,1690146660436 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x0ab080cb-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:39709-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 39849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1399523964-2517 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 2 on default port 39621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 1878183691@qtp-867476200-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40349 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@15b382eb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData-prefix:jenkins-hbase4.apache.org,44239,1690146659892 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data6/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:46839-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:45972 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37565,1690146654356 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2-prefix:jenkins-hbase4.apache.org,39709,1690146660069 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:39709 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 39621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp346738786-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1510508138_17 at /127.0.0.1:54696 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase4:46219Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:39917 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:64936 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@54d77e3c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x00656282-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1146293428-2210 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x00656282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d902e53-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins java.lang.Object.wait(Native Method)
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp728990259-2179 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2-prefix:jenkins-hbase4.apache.org,46219,1690146660226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:64936):
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp1399523964-2518 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 973366112@qtp-685228737-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data5/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x40742411-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp346738786-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x087e9aea-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) 
org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4ecbda38-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase4:46219-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@79432452 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@34b779b7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44181-SendThread(127.0.0.1:50825) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1510508138_17 at /127.0.0.1:48332 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp103469184-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-536-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:64936@0x785ea930-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:48390 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-539-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1146293428-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:39917 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2-prefix:jenkins-hbase4.apache.org,39709,1690146660069.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44239,1690146659892 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp103469184-2235-acceptor-0@4bb87f7f-ServerConnector@7dc44ac0{HTTP/1.1, (http/1.1)}{0.0.0.0:35207} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1888016708@qtp-1893545608-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46693 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:45898 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data4/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 439511011@qtp-867476200-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x785ea930 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data3/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5200be java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50825@0x191c9b36-SendThread(127.0.0.1:50825) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-534-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp403767851-2147 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:35155-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:39917 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4ecbda38 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp728990259-2175-acceptor-0@46a9f44c-ServerConnector@643c73c2{HTTP/1.1, (http/1.1)}{0.0.0.0:36751} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5da743fd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:34653 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x0ab080cb sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1399523964-2520 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50825@0x191c9b36 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728990259-2177 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3ad08ee9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@160bb555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_453968176_17 at /127.0.0.1:48378 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x087e9aea sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@36832bb5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41269 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:34653 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:39917 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp103469184-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:45982 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp728990259-2178 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46839Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_453968176_17 at /127.0.0.1:54744 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4fd90d42 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@1fce7680[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39709Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x69c9dac6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1760408202@qtp-685228737-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35167 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x1d902e53-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4c22eb java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@ed2996e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1f459b6f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x0ab080cb-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:1;jenkins-hbase4:46219 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2247 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp403767851-2146 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data2/current/BP-110239075-172.31.14.131-1690146659155 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-21d6791-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1146293428-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2248 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 39621 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5da743fd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:54750 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44239 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4ecbda38-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-555-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2af07079-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp103469184-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp728990259-2180 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1720180103@qtp-350742907-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 0 on default port 41269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50825@0x191c9b36-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x087e9aea-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@51e1886b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146660734 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 39849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1449156024_17 at /127.0.0.1:54738 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp728990259-2176 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 39849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 1 on default port 39621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1399523964-2514-acceptor-0@7af897cc-ServerConnector@7cd3c1d7{HTTP/1.1, (http/1.1)}{0.0.0.0:35467} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x4fd90d42-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_453968176_17 at /127.0.0.1:48442 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 39849 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp103469184-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146660734 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Server idle connection scanner for port 34653 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp403767851-2148 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44181-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 493186578@qtp-1893545608-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 3 on default port 34653 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1146293428-2204 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34653 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1449156024_17 at /127.0.0.1:45948 [Receiving block 
BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x69c9dac6-SendThread(127.0.0.1:64936) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp403767851-2143 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1765738165) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_966959954_17 at /127.0.0.1:54654 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1449156024_17 at /127.0.0.1:48368 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:34653 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins@localhost:34653 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@24090403 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:39917 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:39917 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Session-HouseKeeper-5ab42e66-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64936@0x69c9dac6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1360659748.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35155 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1146293428-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_453968176_17 at /127.0.0.1:45956 [Receiving block BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 39621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp728990259-2181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46839 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp403767851-2145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:46839 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:34653 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1d902e53-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1850812016) connection to localhost/127.0.0.1:39917 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 4 on default port 41269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp103469184-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/39849 java.lang.Thread.dumpThreads(Native Method) 
java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-110239075-172.31.14.131-1690146659155:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp103469184-2234 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp346738786-2246 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1894846720.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1146293428-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1399523964-2516 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp403767851-2150 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@29bcd69a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=846 (was 766) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=466 (was 479), ProcessCount=173 (was 173), AvailableMemoryMB=7686 (was 8082) 2023-07-23 21:11:02,155 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-23 21:11:02,172 INFO [RS:3;jenkins-hbase4:35155] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35155%2C1690146661842, suffix=, logDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,35155,1690146661842, archiveDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs, maxLogs=32 2023-07-23 21:11:02,173 INFO [Listener at localhost/39849] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=561, OpenFileDescriptor=846, MaxFileDescriptor=60000, SystemLoadAverage=466, ProcessCount=173, AvailableMemoryMB=7684 2023-07-23 21:11:02,173 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-23 21:11:02,173 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-23 21:11:02,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:02,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:02,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:02,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
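The ResourceChecker summary above ("Thread=561 is superior to 500", "Thread LEAK? ... OpenFileDescriptor LEAK?") comes from snapshotting live threads and descriptors before and after each test method; the stack trace for the "Listener at localhost/39849" thread earlier in the dump shows it walking Thread.getAllStackTraces(). A minimal sketch of that kind of before/after thread accounting follows; the class and method names are illustrative only and are not taken from the HBase test code.

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch of before/after thread accounting, similar in shape to
// what the ResourceChecker summary above reports. Names here are assumptions.
public final class ThreadCountCheck {

    // Snapshot the names of all live threads, as Thread.getAllStackTraces() sees them.
    static Set<String> liveThreadNames() {
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        return all.keySet().stream().map(Thread::getName).collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> before = liveThreadNames();
        // ... run the code under test here ...
        Set<String> after = liveThreadNames();

        // Threads present after but not before are the "Thread LEAK?" candidates.
        after.removeAll(before);
        System.out.println("threads now=" + Thread.getAllStackTraces().size()
                + ", new since snapshot=" + after);
    }
}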
2023-07-23 21:11:02,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:02,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:02,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:02,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:02,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:02,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:02,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:02,193 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK] 2023-07-23 21:11:02,193 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK] 2023-07-23 21:11:02,194 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK] 2023-07-23 21:11:02,196 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:02,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:02,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:02,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:02,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:02,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:02,203 INFO [RS:3;jenkins-hbase4:35155] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/WALs/jenkins-hbase4.apache.org,35155,1690146661842/jenkins-hbase4.apache.org%2C35155%2C1690146661842.1690146662173 2023-07-23 21:11:02,208 DEBUG [RS:3;jenkins-hbase4:35155] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42289,DS-408a2d93-6d74-4d7c-beb3-e79864d94a2c,DISK], DatanodeInfoWithStorage[127.0.0.1:39311,DS-de977aed-e6e5-4f60-b31a-4d80af4540ac,DISK], DatanodeInfoWithStorage[127.0.0.1:46529,DS-f0ae744c-d946-443b-97b7-0f61a7bd2e3b,DISK]] 2023-07-23 21:11:02,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:02,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:02,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:02,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:02,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50108 deadline: 1690147862210, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:02,211 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:02,213 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:02,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:02,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:02,214 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:02,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:02,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:02,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:02,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 21:11:02,218 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:11:02,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-23 21:11:02,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:11:02,220 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:02,220 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:02,221 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:02,222 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 21:11:02,224 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 
21:11:02,224 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 empty. 2023-07-23 21:11:02,225 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,225 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 21:11:02,245 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-23 21:11:02,246 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => bbfe1c4a941d3ee7c3635d4753ba2d09, NAME => 't1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing bbfe1c4a941d3ee7c3635d4753ba2d09, disabling compactions & flushes 2023-07-23 21:11:02,266 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. after waiting 0 ms 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,266 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,266 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for bbfe1c4a941d3ee7c3635d4753ba2d09: 2023-07-23 21:11:02,268 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 21:11:02,269 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146662269"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146662269"}]},"ts":"1690146662269"} 2023-07-23 21:11:02,270 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 21:11:02,271 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 21:11:02,271 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146662271"}]},"ts":"1690146662271"} 2023-07-23 21:11:02,272 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 21:11:02,280 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 21:11:02,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, ASSIGN}] 2023-07-23 21:11:02,281 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, ASSIGN 2023-07-23 21:11:02,283 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46219,1690146660226; forceNewPlan=false, retain=false 2023-07-23 21:11:02,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:11:02,433 INFO [jenkins-hbase4:44239] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 21:11:02,435 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=bbfe1c4a941d3ee7c3635d4753ba2d09, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,435 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146662434"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146662434"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146662434"}]},"ts":"1690146662434"} 2023-07-23 21:11:02,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure bbfe1c4a941d3ee7c3635d4753ba2d09, server=jenkins-hbase4.apache.org,46219,1690146660226}] 2023-07-23 21:11:02,491 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-23 21:11:02,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:11:02,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bbfe1c4a941d3ee7c3635d4753ba2d09, NAME => 't1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.', STARTKEY => '', ENDKEY => ''} 2023-07-23 21:11:02,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 21:11:02,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,601 INFO [StoreOpener-bbfe1c4a941d3ee7c3635d4753ba2d09-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,604 DEBUG [StoreOpener-bbfe1c4a941d3ee7c3635d4753ba2d09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/cf1 2023-07-23 21:11:02,604 DEBUG [StoreOpener-bbfe1c4a941d3ee7c3635d4753ba2d09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/cf1 2023-07-23 21:11:02,606 INFO [StoreOpener-bbfe1c4a941d3ee7c3635d4753ba2d09-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 
EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bbfe1c4a941d3ee7c3635d4753ba2d09 columnFamilyName cf1 2023-07-23 21:11:02,615 INFO [StoreOpener-bbfe1c4a941d3ee7c3635d4753ba2d09-1] regionserver.HStore(310): Store=bbfe1c4a941d3ee7c3635d4753ba2d09/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 21:11:02,616 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:02,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 21:11:02,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bbfe1c4a941d3ee7c3635d4753ba2d09; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9736435680, jitterRate=-0.09322376549243927}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 21:11:02,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bbfe1c4a941d3ee7c3635d4753ba2d09: 2023-07-23 21:11:02,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09., pid=14, masterSystemTime=1690146662590 2023-07-23 21:11:02,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:02,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 
2023-07-23 21:11:02,634 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=bbfe1c4a941d3ee7c3635d4753ba2d09, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,634 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146662633"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690146662633"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690146662633"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690146662633"}]},"ts":"1690146662633"} 2023-07-23 21:11:02,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-23 21:11:02,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure bbfe1c4a941d3ee7c3635d4753ba2d09, server=jenkins-hbase4.apache.org,46219,1690146660226 in 199 msec 2023-07-23 21:11:02,639 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 21:11:02,639 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, ASSIGN in 357 msec 2023-07-23 21:11:02,639 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 21:11:02,639 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146662639"}]},"ts":"1690146662639"} 2023-07-23 21:11:02,640 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-23 21:11:02,643 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 21:11:02,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 427 msec 2023-07-23 21:11:02,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 21:11:02,824 INFO [Listener at localhost/39849] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-23 21:11:02,824 DEBUG [Listener at localhost/39849] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-23 21:11:02,824 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:02,826 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-23 21:11:02,827 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:02,827 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-23 21:11:02,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 21:11:02,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 21:11:02,832 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 21:11:02,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-23 21:11:02,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:50108 deadline: 1690146722828, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-23 21:11:02,834 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:02,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=7 msec 2023-07-23 21:11:02,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:02,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:02,936 INFO [Listener at localhost/39849] client.HBaseAdmin$15(890): Started disable of t1 2023-07-23 21:11:02,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-23 21:11:02,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-23 21:11:02,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 21:11:02,940 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146662940"}]},"ts":"1690146662940"} 2023-07-23 21:11:02,941 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-23 21:11:02,943 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-23 21:11:02,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, UNASSIGN}] 2023-07-23 21:11:02,944 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, UNASSIGN 2023-07-23 21:11:02,945 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=bbfe1c4a941d3ee7c3635d4753ba2d09, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:02,945 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146662945"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690146662945"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690146662945"}]},"ts":"1690146662945"} 2023-07-23 21:11:02,946 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure bbfe1c4a941d3ee7c3635d4753ba2d09, server=jenkins-hbase4.apache.org,46219,1690146660226}] 2023-07-23 21:11:03,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 21:11:03,099 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:03,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bbfe1c4a941d3ee7c3635d4753ba2d09, disabling compactions & flushes 2023-07-23 21:11:03,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:03,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:03,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. after waiting 0 ms 2023-07-23 21:11:03,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 
2023-07-23 21:11:03,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 21:11:03,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09. 2023-07-23 21:11:03,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bbfe1c4a941d3ee7c3635d4753ba2d09: 2023-07-23 21:11:03,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:03,110 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=bbfe1c4a941d3ee7c3635d4753ba2d09, regionState=CLOSED 2023-07-23 21:11:03,110 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690146663110"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690146663110"}]},"ts":"1690146663110"} 2023-07-23 21:11:03,114 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 21:11:03,115 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure bbfe1c4a941d3ee7c3635d4753ba2d09, server=jenkins-hbase4.apache.org,46219,1690146660226 in 167 msec 2023-07-23 21:11:03,116 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-23 21:11:03,116 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=bbfe1c4a941d3ee7c3635d4753ba2d09, UNASSIGN in 171 msec 2023-07-23 21:11:03,120 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690146663120"}]},"ts":"1690146663120"} 2023-07-23 21:11:03,121 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-23 21:11:03,123 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-23 21:11:03,126 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 187 msec 2023-07-23 21:11:03,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 21:11:03,242 INFO [Listener at localhost/39849] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-23 21:11:03,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-23 21:11:03,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-23 21:11:03,245 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 21:11:03,246 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-23 21:11:03,246 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-23 21:11:03,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,250 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:03,251 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/cf1, FileablePath, hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/recovered.edits] 2023-07-23 21:11:03,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 21:11:03,256 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/recovered.edits/4.seqid to hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/archive/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09/recovered.edits/4.seqid 2023-07-23 21:11:03,257 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/.tmp/data/default/t1/bbfe1c4a941d3ee7c3635d4753ba2d09 2023-07-23 21:11:03,257 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 21:11:03,259 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-23 21:11:03,260 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-23 21:11:03,261 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-23 21:11:03,262 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-23 21:11:03,262 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-23 21:11:03,262 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690146663262"}]},"ts":"9223372036854775807"} 2023-07-23 21:11:03,264 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 21:11:03,264 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => bbfe1c4a941d3ee7c3635d4753ba2d09, NAME => 't1,,1690146662216.bbfe1c4a941d3ee7c3635d4753ba2d09.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 21:11:03,264 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-23 21:11:03,264 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690146663264"}]},"ts":"9223372036854775807"} 2023-07-23 21:11:03,265 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-23 21:11:03,267 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 21:11:03,267 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-23 21:11:03,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 21:11:03,352 INFO [Listener at localhost/39849] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-23 21:11:03,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:11:03,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,367 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:50108 deadline: 1690147863375, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,376 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:03,379 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,380 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,397 INFO [Listener at localhost/39849] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=572 (was 561) - Thread LEAK? -, OpenFileDescriptor=849 (was 846) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 466) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=7659 (was 7684) 2023-07-23 21:11:03,398 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 21:11:03,414 INFO [Listener at localhost/39849] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=173, AvailableMemoryMB=7658 2023-07-23 21:11:03,414 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 21:11:03,415 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-23 21:11:03,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:11:03,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,428 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-23 21:11:03,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147863438, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,439 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:03,440 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,441 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 21:11:03,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:11:03,444 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-23 21:11:03,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 21:11:03,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 21:11:03,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:11:03,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,462 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147863472, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,473 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:03,474 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,475 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,494 INFO [Listener at localhost/39849] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=173 (was 173), AvailableMemoryMB=7657 (was 7658) 2023-07-23 21:11:03,494 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-23 21:11:03,510 INFO [Listener at localhost/39849] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=173, AvailableMemoryMB=7657 2023-07-23 21:11:03,511 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-23 21:11:03,511 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-23 21:11:03,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:11:03,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,524 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,526 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147863534, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,535 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:03,537 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,538 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:11:03,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,554 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147863563, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,564 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:03,566 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,567 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,587 INFO [Listener at localhost/39849] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=173 (was 173), AvailableMemoryMB=7656 (was 7657) 2023-07-23 21:11:03,587 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-23 21:11:03,607 INFO [Listener at localhost/39849] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575, OpenFileDescriptor=849, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=173, AvailableMemoryMB=7655 2023-07-23 21:11:03,607 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-23 21:11:03,607 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-23 21:11:03,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:03,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 21:11:03,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:03,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:03,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:03,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:03,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:03,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:03,621 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:03,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:03,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,624 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:03,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:03,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147863630, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:03,631 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 21:11:03,635 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:03,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,636 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:03,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:03,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:03,637 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-23 21:11:03,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-23 21:11:03,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-23 21:11:03,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 21:11:03,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-23 21:11:03,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,654 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:11:03,658 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:11:03,660 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-23 21:11:03,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 21:11:03,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-23 21:11:03,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:03,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:50108 deadline: 1690147863755, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-23 21:11:03,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-23 21:11:03,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-23 21:11:03,776 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-23 21:11:03,778 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-23 21:11:03,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-23 21:11:03,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-23 21:11:03,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-23 21:11:03,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-23 21:11:03,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 21:11:03,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:03,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:03,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:03,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-23 21:11:03,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,890 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,892 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-23 21:11:03,893 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,894 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-23 21:11:03,895 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 21:11:03,895 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,897 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 21:11:03,897 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-23 21:11:03,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-23 21:11:03,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-23 21:11:03,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-23 21:11:03,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:03,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:03,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 21:11:04,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:04,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:04,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:04,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:04,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:50108 deadline: 1690146724004, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-23 21:11:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:04,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:04,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:04,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
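
[Annotation] The traffic above is the heart of testNamespaceConstraint: a region server group cannot be removed while some namespace still carries hbase.rsgroup.name pointing at it ("RSGroup Group_foo is referenced by namespace: Group_foo"), and creating a namespace that names a group which was never defined is rejected up front by RSGroupAdminEndpoint.preCreateNamespace ("Region server group foo does not exist."). The following is a minimal sketch of that sequence against the hbase-rsgroup client API seen in the stack traces (RSGroupAdminClient, createNamespace, the hbase.rsgroup.name key); the connection/admin wiring, the NamespaceDescriptor builder chain, and the namespace name in the final step are illustrative assumptions, not the test's own code.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceConstraintSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroups = new RSGroupAdminClient(conn);

      // add rsgroup Group_foo, then bind a namespace to it via hbase.rsgroup.name
      rsGroups.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      try {
        rsGroups.removeRSGroup("Group_foo");
      } catch (IOException e) {
        // Rejected while the namespace still references the group:
        //   ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo
      }

      admin.deleteNamespace("Group_foo");
      rsGroups.removeRSGroup("Group_foo");  // succeeds once the reference is gone

      try {
        // Namespace name here is illustrative; the point is the dangling group name "foo".
        admin.createNamespace(NamespaceDescriptor.create("Group_bar")
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (IOException e) {
        // Rejected by preCreateNamespace:
        //   ConstraintException: Region server group foo does not exist.
      }
    }
  }
}
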
2023-07-23 21:11:04,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:04,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:04,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:04,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-23 21:11:04,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:04,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:04,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 21:11:04,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:04,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 21:11:04,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 21:11:04,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 21:11:04,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 21:11:04,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 21:11:04,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 21:11:04,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:04,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 21:11:04,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 21:11:04,023 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 21:11:04,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 21:11:04,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 21:11:04,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 21:11:04,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 21:11:04,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 21:11:04,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:04,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:04,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44239] to rsgroup master 2023-07-23 21:11:04,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 21:11:04,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:50108 deadline: 1690147864032, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 2023-07-23 21:11:04,033 WARN [Listener at localhost/39849] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 21:11:04,035 INFO [Listener at localhost/39849] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 21:11:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 21:11:04,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 21:11:04,036 INFO [Listener at localhost/39849] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35155, jenkins-hbase4.apache.org:39709, jenkins-hbase4.apache.org:46219, jenkins-hbase4.apache.org:46839], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 21:11:04,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 21:11:04,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44239] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 21:11:04,056 INFO [Listener at localhost/39849] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575 (was 575), OpenFileDescriptor=849 (was 849), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=173 (was 173), AvailableMemoryMB=7653 (was 7655) 2023-07-23 21:11:04,056 WARN [Listener at localhost/39849] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-23 21:11:04,056 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 21:11:04,056 INFO [Listener at localhost/39849] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 21:11:04,056 DEBUG [Listener at localhost/39849] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x69c9dac6 to 127.0.0.1:64936 2023-07-23 21:11:04,056 DEBUG [Listener at localhost/39849] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,056 DEBUG [Listener at localhost/39849] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 
21:11:04,056 DEBUG [Listener at localhost/39849] util.JVMClusterUtil(257): Found active master hash=181504693, stopped=false 2023-07-23 21:11:04,056 DEBUG [Listener at localhost/39849] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 21:11:04,056 DEBUG [Listener at localhost/39849] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 21:11:04,056 INFO [Listener at localhost/39849] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:04,059 INFO [Listener at localhost/39849] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 21:11:04,059 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 21:11:04,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:04,059 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:04,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:04,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:04,060 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 21:11:04,060 DEBUG [Listener at localhost/39849] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ecbda38 to 127.0.0.1:64936 
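
[Annotation] The ConstraintException logged twice above with "Got this on setup, FYI" comes from the harness trying to move jenkins-hbase4.apache.org:44239 into the "master" rsgroup during setup/teardown; in this run 44239 is the HMaster's RPC endpoint rather than one of the region servers (35155, 39709, 46219, 46839), so RSGroupAdminServer.moveServers rejects it and TestRSGroupsBase merely warns. A sketch of that call shape follows; the class and method names (RSGroupAdminClient#addRSGroup/#moveServers, Address) appear in the traces above, while the Connection argument and the surrounding class are assumed wiring.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSketch {
  // Mirrors the harness call that produces the recurring WARN above.
  static void pinMasterIntoGroup(Connection conn) throws IOException {
    RSGroupAdminClient rsGroups = new RSGroupAdminClient(conn);
    rsGroups.addRSGroup("master");
    try {
      // 44239 is the HMaster's RPC port in this log, not a region server address.
      rsGroups.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44239)),
          "master");
    } catch (ConstraintException e) {
      // Rejected by RSGroupAdminServer.moveServers:
      //   "Server jenkins-hbase4.apache.org:44239 is either offline or it does not exist."
      // The test treats this as benign and only logs "Got this on setup, FYI".
    }
  }
}
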
2023-07-23 21:11:04,060 DEBUG [Listener at localhost/39849] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39709,1690146660069' ***** 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46219,1690146660226' ***** 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:04,061 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:04,061 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46839,1690146660436' ***** 2023-07-23 21:11:04,061 INFO [Listener at localhost/39849] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:04,063 INFO [Listener at localhost/39849] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35155,1690146661842' ***** 2023-07-23 21:11:04,063 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,062 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:04,062 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,063 INFO [Listener at localhost/39849] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 21:11:04,063 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:04,066 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:04,069 INFO [RS:1;jenkins-hbase4:46219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1e542859{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:04,069 INFO [RS:0;jenkins-hbase4:39709] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@70a8bb75{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:04,070 INFO [RS:2;jenkins-hbase4:46839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4128c998{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:04,070 INFO [RS:3;jenkins-hbase4:35155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4330e5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 21:11:04,070 INFO [RS:1;jenkins-hbase4:46219] server.AbstractConnector(383): Stopped ServerConnector@846bf45{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 
21:11:04,070 INFO [RS:0;jenkins-hbase4:39709] server.AbstractConnector(383): Stopped ServerConnector@643c73c2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:04,070 INFO [RS:3;jenkins-hbase4:35155] server.AbstractConnector(383): Stopped ServerConnector@7cd3c1d7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:04,070 INFO [RS:0;jenkins-hbase4:39709] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:04,070 INFO [RS:2;jenkins-hbase4:46839] server.AbstractConnector(383): Stopped ServerConnector@7dc44ac0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:04,070 INFO [RS:1;jenkins-hbase4:46219] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:04,071 INFO [RS:0;jenkins-hbase4:39709] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@63c30705{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:04,070 INFO [RS:2;jenkins-hbase4:46839] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:04,070 INFO [RS:3;jenkins-hbase4:35155] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:04,073 INFO [RS:0;jenkins-hbase4:39709] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@8c35413{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:04,073 INFO [RS:1;jenkins-hbase4:46219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e12c56a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:04,074 INFO [RS:3;jenkins-hbase4:35155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b6fd3cd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:04,073 INFO [RS:2;jenkins-hbase4:46839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c18362f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:04,076 INFO [RS:3;jenkins-hbase4:35155] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5e4d2453{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:04,076 INFO [RS:0;jenkins-hbase4:39709] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:04,075 INFO [RS:1;jenkins-hbase4:46219] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1829ee92{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:04,077 INFO [RS:0;jenkins-hbase4:39709] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:04,077 INFO [RS:0;jenkins-hbase4:39709] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 21:11:04,077 INFO [RS:2;jenkins-hbase4:46839] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@26b218ed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:04,077 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(3305): Received CLOSE for e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:04,078 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:04,078 DEBUG [RS:0;jenkins-hbase4:39709] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x785ea930 to 127.0.0.1:64936 2023-07-23 21:11:04,078 DEBUG [RS:0;jenkins-hbase4:39709] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,078 INFO [RS:0;jenkins-hbase4:39709] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:04,078 INFO [RS:0;jenkins-hbase4:39709] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:04,078 INFO [RS:0;jenkins-hbase4:39709] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:04,078 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 21:11:04,078 INFO [RS:1;jenkins-hbase4:46219] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:04,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4eed06501db5f2e2c9115c69aeb57e2, disabling compactions & flushes 2023-07-23 21:11:04,078 INFO [RS:3;jenkins-hbase4:35155] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:04,078 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:04,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:04,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. after waiting 0 ms 2023-07-23 21:11:04,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 
2023-07-23 21:11:04,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e4eed06501db5f2e2c9115c69aeb57e2 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-23 21:11:04,079 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 21:11:04,079 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1478): Online Regions={e4eed06501db5f2e2c9115c69aeb57e2=hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2., 1588230740=hbase:meta,,1.1588230740} 2023-07-23 21:11:04,079 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:04,079 DEBUG [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1504): Waiting on 1588230740, e4eed06501db5f2e2c9115c69aeb57e2 2023-07-23 21:11:04,079 INFO [RS:1;jenkins-hbase4:46219] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:04,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 21:11:04,079 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:04,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 21:11:04,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 21:11:04,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 21:11:04,079 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 21:11:04,079 INFO [RS:2;jenkins-hbase4:46839] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 21:11:04,080 INFO [RS:2;jenkins-hbase4:46839] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:04,080 INFO [RS:2;jenkins-hbase4:46839] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:04,080 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:04,080 DEBUG [RS:2;jenkins-hbase4:46839] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ab080cb to 127.0.0.1:64936 2023-07-23 21:11:04,080 DEBUG [RS:2;jenkins-hbase4:46839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,080 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46839,1690146660436; all regions closed. 2023-07-23 21:11:04,079 INFO [RS:1;jenkins-hbase4:46219] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 21:11:04,080 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(3305): Received CLOSE for c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:04,080 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:04,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2ddf9d01663accd46fe1970916cf273, disabling compactions & flushes 2023-07-23 21:11:04,080 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-23 21:11:04,080 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 21:11:04,079 INFO [RS:3;jenkins-hbase4:35155] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 21:11:04,081 INFO [RS:3;jenkins-hbase4:35155] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 21:11:04,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:04,080 DEBUG [RS:1;jenkins-hbase4:46219] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4fd90d42 to 127.0.0.1:64936 2023-07-23 21:11:04,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:04,081 DEBUG [RS:1;jenkins-hbase4:46219] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,081 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 21:11:04,081 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1478): Online Regions={c2ddf9d01663accd46fe1970916cf273=hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273.} 2023-07-23 21:11:04,081 DEBUG [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1504): Waiting on c2ddf9d01663accd46fe1970916cf273 2023-07-23 21:11:04,081 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:04,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. after waiting 0 ms 2023-07-23 21:11:04,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:04,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c2ddf9d01663accd46fe1970916cf273 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-23 21:11:04,081 DEBUG [RS:3;jenkins-hbase4:35155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x087e9aea to 127.0.0.1:64936 2023-07-23 21:11:04,081 DEBUG [RS:3;jenkins-hbase4:35155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,081 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35155,1690146661842; all regions closed. 
2023-07-23 21:11:04,092 DEBUG [RS:2;jenkins-hbase4:46839] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs 2023-07-23 21:11:04,092 INFO [RS:2;jenkins-hbase4:46839] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46839%2C1690146660436:(num 1690146660986) 2023-07-23 21:11:04,092 DEBUG [RS:2;jenkins-hbase4:46839] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,092 INFO [RS:2;jenkins-hbase4:46839] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,093 DEBUG [RS:3;jenkins-hbase4:35155] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs 2023-07-23 21:11:04,093 INFO [RS:3;jenkins-hbase4:35155] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35155%2C1690146661842:(num 1690146662173) 2023-07-23 21:11:04,093 DEBUG [RS:3;jenkins-hbase4:35155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,093 INFO [RS:3;jenkins-hbase4:35155] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,099 INFO [RS:2;jenkins-hbase4:46839] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:04,099 INFO [RS:2;jenkins-hbase4:46839] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:04,099 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:04,099 INFO [RS:2;jenkins-hbase4:46839] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:04,099 INFO [RS:2;jenkins-hbase4:46839] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:04,099 INFO [RS:3;jenkins-hbase4:35155] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:04,099 INFO [RS:3;jenkins-hbase4:35155] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:04,099 INFO [RS:3;jenkins-hbase4:35155] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:04,099 INFO [RS:3;jenkins-hbase4:35155] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 21:11:04,099 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 21:11:04,100 INFO [RS:3;jenkins-hbase4:35155] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35155 2023-07-23 21:11:04,100 INFO [RS:2;jenkins-hbase4:46839] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46839 2023-07-23 21:11:04,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/.tmp/m/1003ce2829e7429eaee4d54fcdca8e91 2023-07-23 21:11:04,113 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/.tmp/info/40d14c1f4c1644e7bb806c61d1d4be05 2023-07-23 21:11:04,115 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/info/7396bf78be34488796f918d326ba4b2b 2023-07-23 21:11:04,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1003ce2829e7429eaee4d54fcdca8e91 2023-07-23 21:11:04,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/.tmp/m/1003ce2829e7429eaee4d54fcdca8e91 as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/m/1003ce2829e7429eaee4d54fcdca8e91 2023-07-23 21:11:04,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 40d14c1f4c1644e7bb806c61d1d4be05 2023-07-23 21:11:04,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7396bf78be34488796f918d326ba4b2b 2023-07-23 21:11:04,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/.tmp/info/40d14c1f4c1644e7bb806c61d1d4be05 as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/info/40d14c1f4c1644e7bb806c61d1d4be05 2023-07-23 21:11:04,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1003ce2829e7429eaee4d54fcdca8e91 2023-07-23 21:11:04,125 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/m/1003ce2829e7429eaee4d54fcdca8e91, entries=12, sequenceid=29, filesize=5.4 K 2023-07-23 21:11:04,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e4eed06501db5f2e2c9115c69aeb57e2 in 48ms, sequenceid=29, compaction requested=false 2023-07-23 21:11:04,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 40d14c1f4c1644e7bb806c61d1d4be05 2023-07-23 21:11:04,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/info/40d14c1f4c1644e7bb806c61d1d4be05, entries=3, sequenceid=9, filesize=5.0 K 2023-07-23 21:11:04,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for c2ddf9d01663accd46fe1970916cf273 in 49ms, sequenceid=9, compaction requested=false 2023-07-23 21:11:04,137 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/rsgroup/e4eed06501db5f2e2c9115c69aeb57e2/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-23 21:11:04,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:04,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:04,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4eed06501db5f2e2c9115c69aeb57e2: 2023-07-23 21:11:04,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690146661330.e4eed06501db5f2e2c9115c69aeb57e2. 2023-07-23 21:11:04,141 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/namespace/c2ddf9d01663accd46fe1970916cf273/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-23 21:11:04,141 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/rep_barrier/4232673250eb49d69b172b9dac8fbced 2023-07-23 21:11:04,142 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 2023-07-23 21:11:04,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2ddf9d01663accd46fe1970916cf273: 2023-07-23 21:11:04,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690146661153.c2ddf9d01663accd46fe1970916cf273. 
2023-07-23 21:11:04,148 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4232673250eb49d69b172b9dac8fbced 2023-07-23 21:11:04,149 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,159 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/table/c797791b4b7a46bdb6a667036e0b9312 2023-07-23 21:11:04,162 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,165 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c797791b4b7a46bdb6a667036e0b9312 2023-07-23 21:11:04,166 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/info/7396bf78be34488796f918d326ba4b2b as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/info/7396bf78be34488796f918d326ba4b2b 2023-07-23 21:11:04,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7396bf78be34488796f918d326ba4b2b 2023-07-23 21:11:04,171 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/info/7396bf78be34488796f918d326ba4b2b, entries=22, sequenceid=26, filesize=7.3 K 2023-07-23 21:11:04,172 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/rep_barrier/4232673250eb49d69b172b9dac8fbced as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/rep_barrier/4232673250eb49d69b172b9dac8fbced 2023-07-23 21:11:04,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4232673250eb49d69b172b9dac8fbced 2023-07-23 21:11:04,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/rep_barrier/4232673250eb49d69b172b9dac8fbced, entries=1, sequenceid=26, filesize=4.9 K 2023-07-23 21:11:04,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/.tmp/table/c797791b4b7a46bdb6a667036e0b9312 as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/table/c797791b4b7a46bdb6a667036e0b9312 2023-07-23 21:11:04,183 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c797791b4b7a46bdb6a667036e0b9312 2023-07-23 
21:11:04,183 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/table/c797791b4b7a46bdb6a667036e0b9312, entries=6, sequenceid=26, filesize=5.1 K 2023-07-23 21:11:04,184 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 105ms, sequenceid=26, compaction requested=false 2023-07-23 21:11:04,187 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:04,187 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,188 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,187 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35155,1690146661842 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,189 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, 
quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,191 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:04,191 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46839,1690146660436 2023-07-23 21:11:04,194 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-23 21:11:04,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 21:11:04,195 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:04,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 21:11:04,195 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 21:11:04,279 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39709,1690146660069; all regions closed. 2023-07-23 21:11:04,281 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46219,1690146660226; all regions closed. 
2023-07-23 21:11:04,286 DEBUG [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs 2023-07-23 21:11:04,286 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35155,1690146661842] 2023-07-23 21:11:04,286 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35155,1690146661842; numProcessing=1 2023-07-23 21:11:04,286 INFO [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39709%2C1690146660069.meta:.meta(num 1690146661095) 2023-07-23 21:11:04,288 DEBUG [RS:1;jenkins-hbase4:46219] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs 2023-07-23 21:11:04,288 INFO [RS:1;jenkins-hbase4:46219] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46219%2C1690146660226:(num 1690146660986) 2023-07-23 21:11:04,288 DEBUG [RS:1;jenkins-hbase4:46219] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,288 INFO [RS:1;jenkins-hbase4:46219] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,289 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35155,1690146661842 already deleted, retry=false 2023-07-23 21:11:04,289 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35155,1690146661842 expired; onlineServers=3 2023-07-23 21:11:04,289 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46839,1690146660436] 2023-07-23 21:11:04,290 INFO [RS:1;jenkins-hbase4:46219] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:04,290 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46839,1690146660436; numProcessing=2 2023-07-23 21:11:04,290 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:04,290 INFO [RS:1;jenkins-hbase4:46219] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 21:11:04,290 INFO [RS:1;jenkins-hbase4:46219] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 21:11:04,291 INFO [RS:1;jenkins-hbase4:46219] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 21:11:04,292 INFO [RS:1;jenkins-hbase4:46219] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46219 2023-07-23 21:11:04,293 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46839,1690146660436 already deleted, retry=false 2023-07-23 21:11:04,293 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46839,1690146660436 expired; onlineServers=2 2023-07-23 21:11:04,294 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:04,294 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46219,1690146660226 2023-07-23 21:11:04,294 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,294 DEBUG [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/oldWALs 2023-07-23 21:11:04,294 INFO [RS:0;jenkins-hbase4:39709] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39709%2C1690146660069:(num 1690146661000) 2023-07-23 21:11:04,294 DEBUG [RS:0;jenkins-hbase4:39709] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,294 INFO [RS:0;jenkins-hbase4:39709] regionserver.LeaseManager(133): Closed leases 2023-07-23 21:11:04,294 INFO [RS:0;jenkins-hbase4:39709] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 21:11:04,295 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 21:11:04,295 INFO [RS:0;jenkins-hbase4:39709] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39709 2023-07-23 21:11:04,296 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46219,1690146660226] 2023-07-23 21:11:04,296 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46219,1690146660226; numProcessing=3 2023-07-23 21:11:04,298 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46219,1690146660226 already deleted, retry=false 2023-07-23 21:11:04,298 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46219,1690146660226 expired; onlineServers=1 2023-07-23 21:11:04,299 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39709,1690146660069 2023-07-23 21:11:04,299 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 21:11:04,300 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39709,1690146660069] 2023-07-23 21:11:04,300 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39709,1690146660069; numProcessing=4 2023-07-23 21:11:04,302 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39709,1690146660069 already deleted, retry=false 2023-07-23 21:11:04,302 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39709,1690146660069 expired; onlineServers=0 2023-07-23 21:11:04,302 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44239,1690146659892' ***** 2023-07-23 21:11:04,302 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 21:11:04,302 DEBUG [M:0;jenkins-hbase4:44239] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@eee0f37, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 21:11:04,302 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 21:11:04,304 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 21:11:04,304 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 21:11:04,305 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Set watcher on 
znode that does not yet exist, /hbase/master 2023-07-23 21:11:04,305 INFO [M:0;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@28f19164{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 21:11:04,305 INFO [M:0;jenkins-hbase4:44239] server.AbstractConnector(383): Stopped ServerConnector@14cf9bff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:04,305 INFO [M:0;jenkins-hbase4:44239] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 21:11:04,306 INFO [M:0;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a8135f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 21:11:04,307 INFO [M:0;jenkins-hbase4:44239] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f61349d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/hadoop.log.dir/,STOPPED} 2023-07-23 21:11:04,307 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44239,1690146659892 2023-07-23 21:11:04,307 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44239,1690146659892; all regions closed. 2023-07-23 21:11:04,307 DEBUG [M:0;jenkins-hbase4:44239] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 21:11:04,307 INFO [M:0;jenkins-hbase4:44239] master.HMaster(1491): Stopping master jetty server 2023-07-23 21:11:04,308 INFO [M:0;jenkins-hbase4:44239] server.AbstractConnector(383): Stopped ServerConnector@1a6d3c87{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 21:11:04,308 DEBUG [M:0;jenkins-hbase4:44239] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 21:11:04,308 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 21:11:04,308 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146660734] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690146660734,5,FailOnTimeoutGroup] 2023-07-23 21:11:04,308 DEBUG [M:0;jenkins-hbase4:44239] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 21:11:04,308 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146660734] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690146660734,5,FailOnTimeoutGroup] 2023-07-23 21:11:04,308 INFO [M:0;jenkins-hbase4:44239] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 21:11:04,308 INFO [M:0;jenkins-hbase4:44239] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-23 21:11:04,308 INFO [M:0;jenkins-hbase4:44239] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 21:11:04,308 DEBUG [M:0;jenkins-hbase4:44239] master.HMaster(1512): Stopping service threads 2023-07-23 21:11:04,309 INFO [M:0;jenkins-hbase4:44239] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 21:11:04,309 ERROR [M:0;jenkins-hbase4:44239] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 21:11:04,309 INFO [M:0;jenkins-hbase4:44239] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 21:11:04,309 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 21:11:04,309 DEBUG [M:0;jenkins-hbase4:44239] zookeeper.ZKUtil(398): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 21:11:04,309 WARN [M:0;jenkins-hbase4:44239] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 21:11:04,309 INFO [M:0;jenkins-hbase4:44239] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 21:11:04,309 INFO [M:0;jenkins-hbase4:44239] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 21:11:04,309 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 21:11:04,309 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:04,309 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:04,309 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 21:11:04,310 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 21:11:04,310 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.63 KB 2023-07-23 21:11:04,322 INFO [M:0;jenkins-hbase4:44239] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e037556712c44ccf803fc61a78b8e53e 2023-07-23 21:11:04,328 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e037556712c44ccf803fc61a78b8e53e as hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e037556712c44ccf803fc61a78b8e53e 2023-07-23 21:11:04,333 INFO [M:0;jenkins-hbase4:44239] regionserver.HStore(1080): Added hdfs://localhost:34653/user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e037556712c44ccf803fc61a78b8e53e, entries=22, sequenceid=175, filesize=11.1 K 2023-07-23 21:11:04,334 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78010, heapSize ~90.62 KB/92792, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false 2023-07-23 21:11:04,336 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 21:11:04,336 DEBUG [M:0;jenkins-hbase4:44239] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 21:11:04,339 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/7ab59213-83d0-1435-d4d9-d240c97df8a2/MasterData/WALs/jenkins-hbase4.apache.org,44239,1690146659892/jenkins-hbase4.apache.org%2C44239%2C1690146659892.1690146660649 not finished, retry = 0 2023-07-23 21:11:04,440 INFO [M:0;jenkins-hbase4:44239] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 21:11:04,440 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 21:11:04,441 INFO [M:0;jenkins-hbase4:44239] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44239 2023-07-23 21:11:04,442 DEBUG [M:0;jenkins-hbase4:44239] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44239,1690146659892 already deleted, retry=false 2023-07-23 21:11:04,559 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,559 INFO [M:0;jenkins-hbase4:44239] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44239,1690146659892; zookeeper connection closed. 
2023-07-23 21:11:04,560 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): master:44239-0x1019405e1210000, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,660 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,660 INFO [RS:0;jenkins-hbase4:39709] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39709,1690146660069; zookeeper connection closed. 2023-07-23 21:11:04,660 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:39709-0x1019405e1210001, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,660 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@10e90f1a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@10e90f1a 2023-07-23 21:11:04,760 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,760 INFO [RS:1;jenkins-hbase4:46219] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46219,1690146660226; zookeeper connection closed. 2023-07-23 21:11:04,760 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46219-0x1019405e1210002, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,760 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34c5af62] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34c5af62 2023-07-23 21:11:04,860 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,860 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:46839-0x1019405e1210003, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,860 INFO [RS:2;jenkins-hbase4:46839] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46839,1690146660436; zookeeper connection closed. 2023-07-23 21:11:04,861 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@730efb7b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@730efb7b 2023-07-23 21:11:04,960 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,961 INFO [RS:3;jenkins-hbase4:35155] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35155,1690146661842; zookeeper connection closed. 
2023-07-23 21:11:04,961 DEBUG [Listener at localhost/39849-EventThread] zookeeper.ZKWatcher(600): regionserver:35155-0x1019405e121000b, quorum=127.0.0.1:64936, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 21:11:04,961 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@25245a6f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@25245a6f 2023-07-23 21:11:04,961 INFO [Listener at localhost/39849] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-23 21:11:04,961 WARN [Listener at localhost/39849] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:04,965 INFO [Listener at localhost/39849] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:05,068 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:05,068 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-110239075-172.31.14.131-1690146659155 (Datanode Uuid 70878726-e465-42c4-9ad1-f06692ccbfe7) service to localhost/127.0.0.1:34653 2023-07-23 21:11:05,068 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data5/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data6/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,070 WARN [Listener at localhost/39849] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:05,072 INFO [Listener at localhost/39849] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:05,177 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:05,177 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-110239075-172.31.14.131-1690146659155 (Datanode Uuid 7a82de0d-7528-46f5-b355-484990e6653c) service to localhost/127.0.0.1:34653 2023-07-23 21:11:05,177 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data3/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,178 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data4/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,179 WARN [Listener at localhost/39849] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 21:11:05,183 INFO [Listener at localhost/39849] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:05,285 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 21:11:05,285 WARN [BP-110239075-172.31.14.131-1690146659155 heartbeating to localhost/127.0.0.1:34653] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-110239075-172.31.14.131-1690146659155 (Datanode Uuid 9440972b-810e-430d-add5-ba8c71400e68) service to localhost/127.0.0.1:34653 2023-07-23 21:11:05,286 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data1/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,286 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/26f583a2-6929-517b-4a0e-133ce6c97dfb/cluster_c7661e2d-94ca-dbad-fc57-b37a0e657b7d/dfs/data/data2/current/BP-110239075-172.31.14.131-1690146659155] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 21:11:05,298 INFO [Listener at localhost/39849] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 21:11:05,414 INFO [Listener at localhost/39849] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 21:11:05,438 INFO [Listener at localhost/39849] hbase.HBaseTestingUtility(1293): Minicluster is down