2023-07-24 18:10:32,591 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e
2023-07-24 18:10:32,614 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins
2023-07-24 18:10:32,634 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-24 18:10:32,635 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c, deleteOnExit=true
2023-07-24 18:10:32,635 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-24 18:10:32,636 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/test.cache.data in system properties and HBase conf
2023-07-24 18:10:32,636 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.tmp.dir in system properties and HBase conf
2023-07-24 18:10:32,637 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir in system properties and HBase conf
2023-07-24 18:10:32,638 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-24 18:10:32,638 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-24 18:10:32,639 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-24 18:10:32,768 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-24 18:10:33,159 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-24 18:10:33,164 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-24 18:10:33,164 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-24 18:10:33,164 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-24 18:10:33,164 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 18:10:33,165 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-24 18:10:33,165 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-24 18:10:33,165 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-24 18:10:33,166 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 18:10:33,166 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-24 18:10:33,166 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/nfs.dump.dir in system properties and HBase conf
2023-07-24 18:10:33,166 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir in system properties and HBase conf
2023-07-24 18:10:33,167 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-24 18:10:33,167 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-24 18:10:33,167 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-24 18:10:33,700 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 18:10:33,705 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 18:10:34,020 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-24 18:10:34,229 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-24 18:10:34,248 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 18:10:34,293 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 18:10:34,333 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/Jetty_localhost_37219_hdfs____grc61s/webapp
2023-07-24 18:10:34,500 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37219
2023-07-24 18:10:34,514 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-24 18:10:34,514 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-24 18:10:34,995 WARN [Listener at localhost/44619] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 18:10:35,077 WARN [Listener at localhost/44619] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 18:10:35,097 WARN [Listener at localhost/44619] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 18:10:35,106 INFO [Listener at localhost/44619] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 18:10:35,112 INFO [Listener at localhost/44619] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/Jetty_localhost_39367_datanode____.njbcp7/webapp
2023-07-24 18:10:35,258 INFO [Listener at localhost/44619] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39367
2023-07-24 18:10:35,725 WARN [Listener at localhost/39947] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 18:10:35,739 WARN [Listener at localhost/39947] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 18:10:35,743 WARN [Listener at localhost/39947] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 18:10:35,745 INFO [Listener at localhost/39947] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 18:10:35,750 INFO [Listener at localhost/39947] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/Jetty_localhost_43721_datanode____.1aumyq/webapp
2023-07-24 18:10:35,851 INFO [Listener at localhost/39947] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43721
2023-07-24 18:10:35,862 WARN [Listener at localhost/35549] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 18:10:35,913 WARN [Listener at localhost/35549] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-24 18:10:35,916 WARN [Listener at localhost/35549] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-24 18:10:35,918 INFO [Listener at localhost/35549] log.Slf4jLog(67): jetty-6.1.26
2023-07-24 18:10:35,924 INFO [Listener at localhost/35549] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/Jetty_localhost_43901_datanode____.5qut1a/webapp
2023-07-24 18:10:36,036 INFO [Listener at localhost/35549] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43901
2023-07-24 18:10:36,050 WARN [Listener at localhost/44627] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-24 18:10:36,272 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4267b95fec52155: Processing first storage report for DS-49b7a722-b289-44b5-88fc-5d2eedab311e from datanode 182d4e07-339c-40db-baf0-22f3a970020f
2023-07-24 18:10:36,275 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4267b95fec52155: from storage DS-49b7a722-b289-44b5-88fc-5d2eedab311e node DatanodeRegistration(127.0.0.1:41465, datanodeUuid=182d4e07-339c-40db-baf0-22f3a970020f, infoPort=34171, infoSecurePort=0, ipcPort=35549, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,275 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4267b95fec52155: Processing first storage report for DS-741bf201-89b5-46d2-9813-e43d48f3250f from datanode 182d4e07-339c-40db-baf0-22f3a970020f
2023-07-24 18:10:36,275 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4267b95fec52155: from storage DS-741bf201-89b5-46d2-9813-e43d48f3250f node DatanodeRegistration(127.0.0.1:41465, datanodeUuid=182d4e07-339c-40db-baf0-22f3a970020f, infoPort=34171, infoSecurePort=0, ipcPort=35549, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,276 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x28c2788337ee7686: Processing first storage report for DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef from datanode ba5cd861-1676-4644-93dc-b5fe8d8e848d
2023-07-24 18:10:36,276 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x28c2788337ee7686: from storage DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef node DatanodeRegistration(127.0.0.1:34649, datanodeUuid=ba5cd861-1676-4644-93dc-b5fe8d8e848d, infoPort=41531, infoSecurePort=0, ipcPort=39947, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,277 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x28c2788337ee7686: Processing first storage report for DS-8057c6da-0b27-4095-80a1-b2a7585a44d7 from datanode ba5cd861-1676-4644-93dc-b5fe8d8e848d
2023-07-24 18:10:36,277 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x28c2788337ee7686: from storage DS-8057c6da-0b27-4095-80a1-b2a7585a44d7 node DatanodeRegistration(127.0.0.1:34649, datanodeUuid=ba5cd861-1676-4644-93dc-b5fe8d8e848d, infoPort=41531, infoSecurePort=0, ipcPort=39947, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,279 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb748faed9810981d: Processing first storage report for DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350 from datanode 616d8a0c-3e9f-456b-9225-0e95e7fa9e0e
2023-07-24 18:10:36,279 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb748faed9810981d: from storage DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350 node DatanodeRegistration(127.0.0.1:43241, datanodeUuid=616d8a0c-3e9f-456b-9225-0e95e7fa9e0e, infoPort=33467, infoSecurePort=0, ipcPort=44627, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,279 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb748faed9810981d: Processing first storage report for DS-c953c213-1e32-4acd-97ef-5e486cd2ea06 from datanode 616d8a0c-3e9f-456b-9225-0e95e7fa9e0e
2023-07-24 18:10:36,279 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb748faed9810981d: from storage DS-c953c213-1e32-4acd-97ef-5e486cd2ea06 node DatanodeRegistration(127.0.0.1:43241, datanodeUuid=616d8a0c-3e9f-456b-9225-0e95e7fa9e0e, infoPort=33467, infoSecurePort=0, ipcPort=44627, storageInfo=lv=-57;cid=testClusterID;nsid=1977276289;c=1690222233780), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-24 18:10:36,542 DEBUG [Listener at localhost/44627] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e
2023-07-24 18:10:36,608 INFO [Listener at localhost/44627] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/zookeeper_0, clientPort=59012, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-24 18:10:36,623 INFO [Listener at localhost/44627] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=59012
2023-07-24 18:10:36,633 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:36,635 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:37,295 INFO [Listener at localhost/44627] util.FSUtils(471): Created version file at hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 with version=8
2023-07-24 18:10:37,296 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/hbase-staging
2023-07-24 18:10:37,304 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-24 18:10:37,304 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-24 18:10:37,304 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-24 18:10:37,304 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
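[Editor's note] The StartMiniClusterOption line near the top of this log records the cluster shape the test asked for: 1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server. The following is a minimal, hypothetical sketch (not the actual TestRSGroupsBasics setup code) of how a test typically requests such a mini-cluster with the HBase 2.x testing utility; class and method names are the public HBaseTestingUtility/StartMiniClusterOption API, everything else is illustrative.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Mirror the option values logged above.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)        // numMasters=1
        .numRegionServers(3)  // numRegionServers=3
        .numDataNodes(3)      // numDataNodes=3
        .numZkServers(1)      // numZkServers=1
        .build();

    // Starts mini-DFS, mini-ZooKeeper and the HBase master/regionservers,
    // producing startup output much like the log above.
    util.startMiniCluster(option);
    try {
      // ... test body would go here ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
```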
2023-07-24 18:10:37,718 INFO [Listener at localhost/44627] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-24 18:10:38,401 INFO [Listener at localhost/44627] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 18:10:38,438 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:38,439 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:38,439 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 18:10:38,439 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:38,439 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 18:10:38,641 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 18:10:38,735 DEBUG [Listener at localhost/44627] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-24 18:10:38,831 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34677
2023-07-24 18:10:38,841 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:38,843 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:38,865 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34677 connecting to ZooKeeper ensemble=127.0.0.1:59012
2023-07-24 18:10:38,907 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:346770x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 18:10:38,910 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34677-0x101988716b40000 connected
2023-07-24 18:10:38,936 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 18:10:38,937 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 18:10:38,941 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 18:10:38,949 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34677
2023-07-24 18:10:38,950 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34677
2023-07-24 18:10:38,950 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34677
2023-07-24 18:10:38,951 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34677
2023-07-24 18:10:38,951 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34677
2023-07-24 18:10:38,983 INFO [Listener at localhost/44627] log.Log(170): Logging initialized @7203ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-24 18:10:39,115 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 18:10:39,116 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 18:10:39,116 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 18:10:39,118 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-24 18:10:39,119 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 18:10:39,119 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 18:10:39,122 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 18:10:39,185 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 34023
2023-07-24 18:10:39,187 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 18:10:39,216 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,219 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@826539b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE}
2023-07-24 18:10:39,220 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,220 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@43edd3da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-24 18:10:39,391 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 18:10:39,403 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 18:10:39,403 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 18:10:39,405 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 18:10:39,412 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,437 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@633d5d38{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-34023-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2277075608991119138/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-24 18:10:39,449 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@6a462da6{HTTP/1.1, (http/1.1)}{0.0.0.0:34023}
2023-07-24 18:10:39,450 INFO [Listener at localhost/44627] server.Server(415): Started @7669ms
2023-07-24 18:10:39,453 INFO [Listener at localhost/44627] master.HMaster(444): hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9, hbase.cluster.distributed=false
2023-07-24 18:10:39,528 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 18:10:39,528 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,528 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,529 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 18:10:39,529 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,529 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 18:10:39,535 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 18:10:39,539 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43449
2023-07-24 18:10:39,542 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 18:10:39,552 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 18:10:39,554 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,556 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,558 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43449 connecting to ZooKeeper ensemble=127.0.0.1:59012
2023-07-24 18:10:39,563 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:434490x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 18:10:39,564 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43449-0x101988716b40001 connected
2023-07-24 18:10:39,565 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 18:10:39,566 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 18:10:39,567 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 18:10:39,568 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43449
2023-07-24 18:10:39,568 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43449
2023-07-24 18:10:39,568 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43449
2023-07-24 18:10:39,569 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43449
2023-07-24 18:10:39,569 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43449
2023-07-24 18:10:39,571 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 18:10:39,571 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 18:10:39,572 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 18:10:39,573 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 18:10:39,573 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 18:10:39,573 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 18:10:39,573 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 18:10:39,575 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 39369
2023-07-24 18:10:39,575 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 18:10:39,578 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,578 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6f1fe7bc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE}
2023-07-24 18:10:39,578 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,579 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@fa9dc3d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-24 18:10:39,715 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 18:10:39,717 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 18:10:39,717 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 18:10:39,718 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 18:10:39,720 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,725 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2b321cdc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-39369-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6642467478144959101/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-24 18:10:39,727 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@61e8112f{HTTP/1.1, (http/1.1)}{0.0.0.0:39369}
2023-07-24 18:10:39,727 INFO [Listener at localhost/44627] server.Server(415): Started @7947ms
2023-07-24 18:10:39,741 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 18:10:39,742 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,742 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,743 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 18:10:39,743 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,743 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 18:10:39,743 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 18:10:39,745 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35913
2023-07-24 18:10:39,745 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 18:10:39,746 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 18:10:39,747 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,749 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,750 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35913 connecting to ZooKeeper ensemble=127.0.0.1:59012
2023-07-24 18:10:39,754 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:359130x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 18:10:39,755 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35913-0x101988716b40002 connected
2023-07-24 18:10:39,755 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 18:10:39,756 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 18:10:39,757 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 18:10:39,757 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35913
2023-07-24 18:10:39,758 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35913
2023-07-24 18:10:39,758 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35913
2023-07-24 18:10:39,758 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35913
2023-07-24 18:10:39,758 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35913
2023-07-24 18:10:39,762 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 18:10:39,762 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 18:10:39,762 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 18:10:39,763 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 18:10:39,763 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 18:10:39,763 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 18:10:39,763 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 18:10:39,764 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 42319
2023-07-24 18:10:39,764 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 18:10:39,768 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,768 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a4c486{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE}
2023-07-24 18:10:39,769 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,769 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4677833f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-24 18:10:39,891 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 18:10:39,892 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 18:10:39,892 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 18:10:39,893 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 18:10:39,894 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,894 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6b2e3c5c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-42319-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5144626473051272159/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-24 18:10:39,896 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@74cb4810{HTTP/1.1, (http/1.1)}{0.0.0.0:42319}
2023-07-24 18:10:39,896 INFO [Listener at localhost/44627] server.Server(415): Started @8115ms
2023-07-24 18:10:39,908 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-24 18:10:39,909 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,909 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,909 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-24 18:10:39,909 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-24 18:10:39,910 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-24 18:10:39,910 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-24 18:10:39,911 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34741
2023-07-24 18:10:39,912 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-24 18:10:39,912 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-24 18:10:39,914 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,916 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:39,917 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34741 connecting to ZooKeeper ensemble=127.0.0.1:59012
2023-07-24 18:10:39,922 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:347410x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-24 18:10:39,923 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:347410x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 18:10:39,924 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34741-0x101988716b40003 connected
2023-07-24 18:10:39,924 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-24 18:10:39,925 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-24 18:10:39,925 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34741
2023-07-24 18:10:39,926 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34741
2023-07-24 18:10:39,927 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34741
2023-07-24 18:10:39,928 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34741
2023-07-24 18:10:39,928 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34741
2023-07-24 18:10:39,930 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-24 18:10:39,930 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-24 18:10:39,930 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-24 18:10:39,931 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-24 18:10:39,931 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-24 18:10:39,931 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-24 18:10:39,931 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-24 18:10:39,932 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 43923
2023-07-24 18:10:39,932 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 18:10:39,933 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,934 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f1afb19{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE}
2023-07-24 18:10:39,934 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:39,934 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d2b850b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-24 18:10:40,052 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-24 18:10:40,053 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-24 18:10:40,054 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-24 18:10:40,054 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-24 18:10:40,055 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-24 18:10:40,056 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3749e1fd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-43923-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8246214671617832960/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-24 18:10:40,057 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@595b0d99{HTTP/1.1, (http/1.1)}{0.0.0.0:43923}
2023-07-24 18:10:40,057 INFO [Listener at localhost/44627] server.Server(415): Started @8277ms
2023-07-24 18:10:40,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-24 18:10:40,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@64a59ec9{HTTP/1.1, (http/1.1)}{0.0.0.0:39229}
2023-07-24 18:10:40,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8287ms
2023-07-24 18:10:40,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34677,1690222237492
2023-07-24 18:10:40,077 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 18:10:40,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34677,1690222237492
2023-07-24 18:10:40,097 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 18:10:40,098 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 18:10:40,097 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 18:10:40,098 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master
2023-07-24 18:10:40,098 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 18:10:40,100 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 18:10:40,101 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34677,1690222237492 from backup master directory
2023-07-24 18:10:40,101 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master
2023-07-24 18:10:40,105 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34677,1690222237492
2023-07-24 18:10:40,105 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters
2023-07-24 18:10:40,106 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!)
2023-07-24 18:10:40,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34677,1690222237492
2023-07-24 18:10:40,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0
2023-07-24 18:10:40,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0
2023-07-24 18:10:40,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/hbase.id with ID: c1c1d27a-de9f-4d59-a5d6-234fda91a21c
2023-07-24 18:10:40,299 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-24 18:10:40,316 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 18:10:40,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x439cd75f to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms
2023-07-24 18:10:40,398 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3942ed2c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
2023-07-24 18:10:40,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2023-07-24 18:10:40,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000
2023-07-24 18:10:40,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below
2023-07-24 18:10:40,446 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x
2023-07-24 18:10:40,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-24 18:10:40,452 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:40,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:40,493 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store-tmp 2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:10:40,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:40,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
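Note: the records above show the master bootstrapping its local 'master:store' region with a single 'proc' column family (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', IN_MEMORY => 'false'). The master builds that descriptor internally, but as a point of reference, a minimal sketch of declaring an equivalent family with the public HBase 2.x client builder API could look like the following; the class name ProcFamilySketch and the placeholder table name "example:store" are illustrative only and not part of the log.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilySketch {
        public static void main(String[] args) {
            // Mirror the attributes printed for the 'proc' family above:
            // ROW bloom filter, 1 version, 64 KiB blocks, not in-memory.
            ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)
                .setMaxVersions(1)
                .setBlocksize(64 * 1024)
                .setInMemory(false)
                .build();
            // Placeholder table name; 'master:store' itself is created by the
            // master internally, never through the client API.
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example", "store"))
                .setColumnFamily(proc)
                .build();
            System.out.println(td);
        }
    }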
2023-07-24 18:10:40,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:40,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:40,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34677%2C1690222237492, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/oldWALs, maxLogs=10 2023-07-24 18:10:40,618 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:40,618 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:40,618 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:40,626 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:40,710 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:10:40,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:10:40,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:40,711 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:40,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,782 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,789 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:10:40,823 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:10:40,840 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 18:10:40,845 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:40,869 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:40,871 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9404644960, jitterRate=-0.1241241842508316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:40,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:40,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:10:40,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:10:40,905 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:10:40,908 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 18:10:40,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 18:10:40,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 42 msec 2023-07-24 18:10:40,953 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:10:40,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 18:10:40,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
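Note: the record above prints the region's split policy together with the jitter applied to the configured maximum file size: desiredMaxFileSize=9404644960 at jitterRate=-0.1241241842508316. Assuming the stock hbase.hregion.max.filesize of 10 GiB (10737418240 bytes) is in effect for this mini-cluster, the printed value is just the base size scaled by (1 + jitterRate); a quick check (the class name SplitSizeJitterSketch is illustrative):

    public class SplitSizeJitterSketch {
        public static void main(String[] args) {
            // Assumption: the default hbase.hregion.max.filesize of 10 GiB applies here.
            long configuredMaxFileSize = 10L * 1024 * 1024 * 1024;   // 10737418240
            double jitterRate = -0.1241241842508316;                 // value from the record above
            long desired = (long) (configuredMaxFileSize * (1.0 + jitterRate));
            // Prints ~9404644960, matching the desiredMaxFileSize reported above
            // (give or take a byte of floating-point rounding).
            System.out.println(desired);
        }
    }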
2023-07-24 18:10:41,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 18:10:41,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:10:41,011 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:10:41,014 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:41,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:10:41,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:10:41,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:10:41,034 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:41,034 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:41,034 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:41,034 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:41,034 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:41,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34677,1690222237492, sessionid=0x101988716b40000, setting cluster-up flag (Was=false) 2023-07-24 18:10:41,052 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:41,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:10:41,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:41,066 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:41,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:10:41,075 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:41,078 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp 2023-07-24 18:10:41,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:10:41,165 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:10:41,165 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:10:41,165 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:10:41,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:10:41,169 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:41,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:10:41,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
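Note: at this point the RSGroupAdminService coprocessor service and the RSGroupAdminEndpoint / TestRSGroupsBase$CPMasterObserver coprocessors have been loaded on the master. On a regular (non-test) 2.4 deployment the same feature is usually switched on through site configuration rather than by a test harness; a minimal sketch, assuming the commonly documented key names, with the class name RsGroupConfigSketch chosen here purely for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Load the rsgroup admin endpoint on the master, matching the coprocessor seen above.
            conf.set("hbase.coprocessor.master.classes",
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            // Pair it with the group-aware balancer so placement honours the groups.
            conf.set("hbase.master.loadbalancer.class",
                "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
            System.out.println(conf.get("hbase.coprocessor.master.classes"));
        }
    }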
2023-07-24 18:10:41,172 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:41,172 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:41,172 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:41,180 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:41,180 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:41,180 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:41,180 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:41,180 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:41,180 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:41,185 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:41,185 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:41,185 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:41,187 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ReadOnlyZKClient(139): Connect 0x3d166dae to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:41,187 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ReadOnlyZKClient(139): Connect 0x049f4959 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:41,187 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ReadOnlyZKClient(139): Connect 0x55fcd743 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:41,195 DEBUG [RS:1;jenkins-hbase4:35913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7814e468, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:41,195 DEBUG [RS:2;jenkins-hbase4:34741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@764593a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:41,196 DEBUG [RS:1;jenkins-hbase4:35913] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f0bc542, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:41,196 DEBUG [RS:2;jenkins-hbase4:34741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@715976d4, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:41,199 DEBUG [RS:0;jenkins-hbase4:43449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e9a6aa8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:41,199 DEBUG [RS:0;jenkins-hbase4:43449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@301823f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:41,224 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34741 2023-07-24 18:10:41,225 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43449 2023-07-24 18:10:41,225 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35913 2023-07-24 18:10:41,230 INFO [RS:1;jenkins-hbase4:35913] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:41,230 INFO [RS:0;jenkins-hbase4:43449] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:41,230 INFO [RS:2;jenkins-hbase4:34741] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:41,232 INFO [RS:2;jenkins-hbase4:34741] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:41,232 INFO [RS:0;jenkins-hbase4:43449] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:41,232 INFO [RS:1;jenkins-hbase4:35913] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:41,232 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:41,232 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:41,232 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1022): About to register with Master. 
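Note: the three region servers have now built their RPC clients and are about to report for duty. For reference, an external client reaching this same mini-cluster would only need the ZooKeeper ensemble shown throughout the log (127.0.0.1 on the ephemeral port 59012, which changes on every run); a minimal sketch, with the class name ClientConnectionSketch being illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectionSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Quorum and client port as logged by the ZK watchers above
            // (the mini-cluster picks a fresh port on every run).
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");
            conf.setInt("hbase.zookeeper.property.clientPort", 59012);
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // Should echo the ClusterId the region servers reported above.
                System.out.println(admin.getClusterMetrics().getClusterId());
            }
        }
    }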
2023-07-24 18:10:41,236 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:43449, startcode=1690222239527 2023-07-24 18:10:41,236 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:35913, startcode=1690222239741 2023-07-24 18:10:41,236 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:34741, startcode=1690222239908 2023-07-24 18:10:41,258 DEBUG [RS:1;jenkins-hbase4:35913] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:41,258 DEBUG [RS:2;jenkins-hbase4:34741] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:41,259 DEBUG [RS:0;jenkins-hbase4:43449] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:41,273 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:41,332 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:41,333 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51015, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:41,333 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59039, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:41,333 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56805, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:41,338 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
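Note: the balancer lines above show the StochasticLoadBalancer coming up with maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000 and runMaxSteps=false. If those defaults ever needed changing, the usual knobs are the hbase.master.balancer.stochastic.* keys; the sketch below simply restates the logged values (key names assumed to be the ones current 2.x releases document, class name BalancerTuningSketch illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BalancerTuningSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Restate the values the balancer reported above; tuning is only
            // needed when the defaults do not fit the cluster.
            conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1000000);
            conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
            conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30000L);
            conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
            System.out.println(conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
        }
    }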
2023-07-24 18:10:41,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:41,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:10:41,342 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:41,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:41,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:41,344 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; 
timeout=30000, timestamp=1690222271351 2023-07-24 18:10:41,353 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:41,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:10:41,355 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:41,359 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:41,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:10:41,360 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 18:10:41,363 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:41,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:10:41,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:10:41,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:10:41,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:10:41,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:10:41,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:10:41,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:10:41,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:10:41,379 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:41,380 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:41,380 WARN [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 18:10:41,380 WARN [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 18:10:41,379 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:41,381 WARN [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 18:10:41,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:10:41,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222241383,5,FailOnTimeoutGroup] 2023-07-24 18:10:41,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222241383,5,FailOnTimeoutGroup] 2023-07-24 18:10:41,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
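Note: interleaved with the cleaner-chore setup, the region servers keep hitting ServerNotRunningYetException from regionServerStartup and back off for 100 ms before retrying ("reportForDuty failed; sleeping 100 ms and then retrying."). Stripped of all HBase specifics, that is a plain retry loop; an illustrative, simplified sketch (the MasterCall interface and the class name are invented for this example and are not HBase API):

    public class ReportForDutyRetrySketch {
        /** Stand-in for the RPC the region server makes; not an HBase interface. */
        interface MasterCall {
            void reportForDuty() throws Exception;
        }

        static void registerWithRetry(MasterCall master) throws InterruptedException {
            while (true) {
                try {
                    master.reportForDuty();   // succeeds once the master's RPC services are up
                    return;
                } catch (Exception serverNotRunningYet) {
                    Thread.sleep(100L);       // same 100 ms pause seen in the log
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            long start = System.currentTimeMillis();
            // Simulate a master that only starts answering after ~300 ms.
            registerWithRetry(() -> {
                if (System.currentTimeMillis() - start < 300) {
                    throw new IllegalStateException("Server is not running yet");
                }
            });
            System.out.println("registered");
        }
    }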
2023-07-24 18:10:41,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,459 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:41,461 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:41,461 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:41,484 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:34741, startcode=1690222239908 2023-07-24 18:10:41,484 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:35913, startcode=1690222239741 2023-07-24 18:10:41,484 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:43449, startcode=1690222239527 2023-07-24 18:10:41,487 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:41,490 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:41,491 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] master.ServerManager(394): Registering 
regionserver=jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,492 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:10:41,493 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:41,493 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:41,494 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:10:41,495 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:41,496 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:41,499 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:41,499 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:41,501 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:41,501 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 
18:10:41,501 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:41,501 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:41,502 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:10:41,502 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:41,502 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:10:41,503 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34023 2023-07-24 18:10:41,503 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,503 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:41,503 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:10:41,504 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:41,504 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:10:41,504 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34023 2023-07-24 18:10:41,505 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:41,505 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:10:41,505 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34023 2023-07-24 18:10:41,506 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:10:41,507 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:41,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:41,512 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:41,513 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:41,514 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:41,517 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,518 WARN [RS:0;jenkins-hbase4:43449] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:41,518 INFO [RS:0;jenkins-hbase4:43449] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:41,518 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,518 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,519 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:10:41,518 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,521 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:41,519 WARN [RS:2;jenkins-hbase4:34741] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:41,520 WARN [RS:1;jenkins-hbase4:35913] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
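The FlushLargeStoresPolicy entry above falls back to region.getMemStoreFlushHeapSize divided by the number of column families because no per-family lower bound is set for hbase:meta. Assuming the stock 128 MiB region memstore flush size (an assumption on my part, not something this log states) and the three meta families seen here (info, rep_barrier, table), the arithmetic reproduces both the "42.7 M" above and the flushSizeLowerBound=44739242 printed when the region finishes opening just below. Pure JDK, runnable as-is in jshell:

    long flushSize = 128L * 1024 * 1024;      // assumed default hbase.hregion.memstore.flush.size = 134217728 bytes
    int families = 3;                          // info, rep_barrier, table
    long lowerBound = flushSize / families;    // integer division
    System.out.println(lowerBound);                               // 44739242
    System.out.printf("%.1f M%n", lowerBound / 1024.0 / 1024.0);  // 42.7 M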
2023-07-24 18:10:41,524 INFO [RS:1;jenkins-hbase4:35913] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:41,525 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,523 INFO [RS:2;jenkins-hbase4:34741] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:41,525 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,527 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34741,1690222239908] 2023-07-24 18:10:41,527 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35913,1690222239741] 2023-07-24 18:10:41,527 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43449,1690222239527] 2023-07-24 18:10:41,540 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:41,541 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11461169440, jitterRate=0.06740458309650421}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:41,542 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:41,542 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:41,542 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:41,542 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:41,542 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:41,542 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:41,544 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:41,544 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:41,551 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:41,552 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 18:10:41,552 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,552 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,552 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,553 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,553 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,553 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,553 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,554 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,554 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,562 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:10:41,569 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:41,569 DEBUG [RS:2;jenkins-hbase4:34741] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:41,569 DEBUG [RS:1;jenkins-hbase4:35913] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:41,578 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:10:41,582 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 18:10:41,582 INFO [RS:1;jenkins-hbase4:35913] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:41,583 INFO [RS:2;jenkins-hbase4:34741] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:41,582 INFO [RS:0;jenkins-hbase4:43449] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:41,608 INFO 
[RS:1;jenkins-hbase4:35913] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:41,608 INFO [RS:2;jenkins-hbase4:34741] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:41,608 INFO [RS:0;jenkins-hbase4:43449] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:41,616 INFO [RS:2;jenkins-hbase4:34741] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:41,616 INFO [RS:0;jenkins-hbase4:43449] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:41,617 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,616 INFO [RS:1;jenkins-hbase4:35913] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:41,617 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,617 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,618 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:41,618 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:41,618 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:41,627 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,627 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,627 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
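The MemStoreFlusher entries above report a global memstore limit of 782.4 M with a low-water mark of 743.3 M. Assuming the usual lower-limit fraction of 0.95 (hbase.regionserver.global.memstore.size.lower.limit; an assumed default, not read from this log), the two figures line up to the printed precision. Pure JDK, runnable in jshell:

    double globalLimitMb = 782.4;    // globalMemStoreLimit from the log line above
    double lowerFraction = 0.95;     // assumed default lower-limit fraction
    System.out.printf("%.1f M%n", globalLimitMb * lowerFraction);   // 743.3 M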
2023-07-24 18:10:41,628 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,628 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:41,628 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,629 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:41,629 DEBUG [RS:1;jenkins-hbase4:35913] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,630 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,630 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,630 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,630 DEBUG [RS:0;jenkins-hbase4:43449] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,631 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,631 DEBUG [RS:2;jenkins-hbase4:34741] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:41,631 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,632 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,632 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,632 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,634 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,634 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
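The "CompactionChecker runs every PT1S" entry above uses ISO-8601 duration notation, and the matching chore registrations report the same interval as period=1000 milliseconds. A one-line sanity check of the notation (pure JDK, runnable in jshell):

    System.out.println(java.time.Duration.parse("PT1S").toMillis());   // 1000, i.e. the chore period above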
2023-07-24 18:10:41,634 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,634 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,634 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,653 INFO [RS:2;jenkins-hbase4:34741] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:41,653 INFO [RS:1;jenkins-hbase4:35913] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:41,653 INFO [RS:0;jenkins-hbase4:43449] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:41,657 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34741,1690222239908-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,657 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35913,1690222239741-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,657 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43449,1690222239527-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:41,684 INFO [RS:1;jenkins-hbase4:35913] regionserver.Replication(203): jenkins-hbase4.apache.org,35913,1690222239741 started 2023-07-24 18:10:41,684 INFO [RS:0;jenkins-hbase4:43449] regionserver.Replication(203): jenkins-hbase4.apache.org,43449,1690222239527 started 2023-07-24 18:10:41,684 INFO [RS:2;jenkins-hbase4:34741] regionserver.Replication(203): jenkins-hbase4.apache.org,34741,1690222239908 started 2023-07-24 18:10:41,684 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35913,1690222239741, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35913, sessionid=0x101988716b40002 2023-07-24 18:10:41,687 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34741,1690222239908, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34741, sessionid=0x101988716b40003 2023-07-24 18:10:41,686 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43449,1690222239527, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43449, sessionid=0x101988716b40001 2023-07-24 18:10:41,687 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:41,687 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:41,687 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:41,687 DEBUG [RS:1;jenkins-hbase4:35913] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,687 DEBUG [RS:2;jenkins-hbase4:34741] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,688 DEBUG [RS:1;jenkins-hbase4:35913] 
procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35913,1690222239741' 2023-07-24 18:10:41,688 DEBUG [RS:0;jenkins-hbase4:43449] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,690 DEBUG [RS:1;jenkins-hbase4:35913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:41,688 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34741,1690222239908' 2023-07-24 18:10:41,690 DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43449,1690222239527' 2023-07-24 18:10:41,690 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:41,690 DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:41,691 DEBUG [RS:1;jenkins-hbase4:35913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:41,691 DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:41,691 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:41,691 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:41,692 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:41,692 DEBUG [RS:0;jenkins-hbase4:43449] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:41,692 DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43449,1690222239527' 2023-07-24 18:10:41,692 DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34741,1690222239908' 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:41,692 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:41,692 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:41,692 
DEBUG [RS:0;jenkins-hbase4:43449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:41,692 DEBUG [RS:1;jenkins-hbase4:35913] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:41,692 DEBUG [RS:1;jenkins-hbase4:35913] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35913,1690222239741' 2023-07-24 18:10:41,692 DEBUG [RS:1;jenkins-hbase4:35913] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:41,692 DEBUG [RS:2;jenkins-hbase4:34741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:41,693 DEBUG [RS:0;jenkins-hbase4:43449] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:41,693 INFO [RS:0;jenkins-hbase4:43449] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:41,693 INFO [RS:0;jenkins-hbase4:43449] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:10:41,693 DEBUG [RS:1;jenkins-hbase4:35913] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:41,694 DEBUG [RS:1;jenkins-hbase4:35913] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:41,694 INFO [RS:1;jenkins-hbase4:35913] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:41,694 INFO [RS:1;jenkins-hbase4:35913] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:10:41,694 DEBUG [RS:2;jenkins-hbase4:34741] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:41,694 INFO [RS:2;jenkins-hbase4:34741] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:41,694 INFO [RS:2;jenkins-hbase4:34741] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
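All three region servers log "Quota support disabled" above because quota support is off unless it is switched on before the cluster starts; to the best of my knowledge the switch is the hbase.quota.enabled key (treat the name as an assumption, it is not stated in this log). A minimal configuration sketch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotas {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed key; left at its default (false), the RPC and space quota
        // managers above report themselves disabled.
        conf.setBoolean("hbase.quota.enabled", true);
        System.out.println(conf.getBoolean("hbase.quota.enabled", false));
      }
    }

Setting the flag only has an effect if it is part of the site configuration that the master and region servers actually start with.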
2023-07-24 18:10:41,734 DEBUG [jenkins-hbase4:34677] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:10:41,748 DEBUG [jenkins-hbase4:34677] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:41,750 DEBUG [jenkins-hbase4:34677] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:41,750 DEBUG [jenkins-hbase4:34677] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:41,750 DEBUG [jenkins-hbase4:34677] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:41,750 DEBUG [jenkins-hbase4:34677] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:41,754 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34741,1690222239908, state=OPENING 2023-07-24 18:10:41,763 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 18:10:41,765 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:41,766 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:41,770 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34741,1690222239908}] 2023-07-24 18:10:41,845 INFO [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34741%2C1690222239908, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:41,845 INFO [RS:0;jenkins-hbase4:43449] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43449%2C1690222239527, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:41,845 INFO [RS:1;jenkins-hbase4:35913] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35913%2C1690222239741, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:41,872 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:41,873 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:41,873 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:41,881 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:41,881 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:41,882 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:41,889 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:41,890 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:41,890 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:41,897 INFO [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908/jenkins-hbase4.apache.org%2C34741%2C1690222239908.1690222241849 2023-07-24 18:10:41,898 INFO [RS:1;jenkins-hbase4:35913] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741/jenkins-hbase4.apache.org%2C35913%2C1690222239741.1690222241849 2023-07-24 18:10:41,897 INFO [RS:0;jenkins-hbase4:43449] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527/jenkins-hbase4.apache.org%2C43449%2C1690222239527.1690222241849 2023-07-24 18:10:41,898 DEBUG [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:10:41,899 DEBUG [RS:1;jenkins-hbase4:35913] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:10:41,899 DEBUG [RS:0;jenkins-hbase4:43449] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:10:41,947 WARN [ReadOnlyZKClient-127.0.0.1:59012@0x439cd75f] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:10:41,985 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:41,987 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:41,991 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53762, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:41,995 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:42,011 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53774, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:42,016 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34741] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53774 deadline: 1690222302015, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:42,021 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:10:42,022 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:42,025 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34741%2C1690222239908.meta, suffix=.meta, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:42,049 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:42,049 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 
18:10:42,049 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:42,056 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908/jenkins-hbase4.apache.org%2C34741%2C1690222239908.meta.1690222242027.meta 2023-07-24 18:10:42,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:10:42,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:42,059 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:42,062 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:10:42,064 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
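The coprocessor load above attaches MultiRowMutationEndpoint with priority 536870911. That value is Integer.MAX_VALUE / 4, which I read as the Coprocessor.PRIORITY_SYSTEM constant used for system tables such as hbase:meta (the constant name is my interpretation; the arithmetic itself is checkable). Runnable in jshell, the second line needing the hbase-client jar on the class path:

    System.out.println(Integer.MAX_VALUE / 4);   // 536870911, the priority shown in the load line above
    // Should print the same value, assuming PRIORITY_SYSTEM is indeed the constant behind it:
    System.out.println(org.apache.hadoop.hbase.Coprocessor.PRIORITY_SYSTEM);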
2023-07-24 18:10:42,069 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:10:42,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:10:42,070 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:10:42,073 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:42,075 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:10:42,075 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:10:42,076 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:42,077 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,077 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:42,078 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:42,078 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:42,079 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:42,080 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,080 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:42,081 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:10:42,081 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:10:42,082 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:42,082 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,084 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:42,087 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:42,090 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
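The repeated "Set storagePolicy=HOT" entries above come from an HBase utility that, as far as I can tell, wraps the plain Hadoop FileSystem storage-policy call. A hedged sketch of that underlying call; the path is hypothetical, and filesystems other than HDFS may reject it:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetHotPolicy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical directory; in the log it is a column-family directory of hbase:meta.
        Path dir = new Path("hdfs://localhost:8020/tmp/demo-cf");
        FileSystem fs = dir.getFileSystem(conf);
        // HOT is one of the standard HDFS storage policies, matching the log entries above.
        fs.setStoragePolicy(dir, "HOT");
      }
    }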
2023-07-24 18:10:42,093 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:42,094 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10890324160, jitterRate=0.014240473508834839}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:42,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:42,105 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690222241980 2023-07-24 18:10:42,128 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:10:42,129 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:10:42,130 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34741,1690222239908, state=OPEN 2023-07-24 18:10:42,133 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:42,133 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:42,150 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 18:10:42,150 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34741,1690222239908 in 363 msec 2023-07-24 18:10:42,161 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 18:10:42,161 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 590 msec 2023-07-24 18:10:42,168 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 986 msec 2023-07-24 18:10:42,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222242168, completionTime=-1 2023-07-24 18:10:42,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 18:10:42,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
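The "Opened 1588230740" line above prints desiredMaxFileSize=10890324160 together with jitterRate=0.014240473508834839, and the earlier open logged further up follows the same pattern with a different jitter. Assuming the stock 10 GiB hbase.hregion.max.filesize (an assumption on my part), the desired size is just the base size plus a jittered fraction of it; a back-of-the-envelope check, pure JDK, runnable in jshell:

    long base = 10L * 1024 * 1024 * 1024;         // 10737418240, assumed default hbase.hregion.max.filesize
    double jitterRate = 0.014240473508834839;     // from the log line above
    long jitterBytes = (long) (base * jitterRate);
    System.out.println(base + jitterBytes);       // should reproduce the desiredMaxFileSize above, 10890324160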
2023-07-24 18:10:42,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:10:42,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222302230 2023-07-24 18:10:42,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222362231 2023-07-24 18:10:42,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 62 msec 2023-07-24 18:10:42,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34677,1690222237492-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:42,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34677,1690222237492-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:42,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34677,1690222237492-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:42,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34677, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:42,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:42,269 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 18:10:42,281 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
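The master has just scheduled its periodic chores above (BalancerChore and RegionNormalizerChore every 300000 ms, CatalogJanitor every 300000 ms, and so on). The same balancer and normalizer passes can also be requested on demand from a client; a minimal sketch, assuming a reachable cluster and the default client configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class KickChores {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // The chores above trigger these runs on a schedule; a client can ask directly.
          boolean balancerRan = admin.balance();
          boolean normalizerRan = admin.normalize();
          System.out.println("balance=" + balancerRan + " normalize=" + normalizerRan);
        }
      }
    }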
2023-07-24 18:10:42,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:42,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 18:10:42,300 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:42,304 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:42,322 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,325 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 empty. 2023-07-24 18:10:42,326 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,326 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 18:10:42,383 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:42,385 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:10:42,403 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
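The create statement above spells out the 'hbase:namespace' descriptor as attribute text (VERSIONS => '10', IN_MEMORY => 'true', BLOCKSIZE => '8192', BLOOMFILTER => 'ROW', and so on). For reference, a sketch of building an equivalent single-family descriptor through the public client API, using a hypothetical user table name rather than the system table:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeDescriptor {
      public static void main(String[] args) {
        // Hypothetical table; the family attributes mirror the 'info' family printed above.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)
            .setInMemory(true)
            .setMaxVersions(10)
            .setBlocksize(8192)
            .build();
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_ns_like"))
            .setColumnFamily(info)
            .build();
        System.out.println(td);
      }
    }

The attributes not set explicitly here (TTL, MIN_VERSIONS, compression, encoding, replication scope) should fall back to the same defaults the printout above shows.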
2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:42,403 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:42,403 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:10:42,407 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:42,422 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222242410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222242410"}]},"ts":"1690222242410"} 2023-07-24 18:10:42,450 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:42,452 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:42,458 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222242452"}]},"ts":"1690222242452"} 2023-07-24 18:10:42,464 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 18:10:42,468 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:42,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:42,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:42,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:42,469 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:42,471 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN}] 2023-07-24 18:10:42,474 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN 2023-07-24 18:10:42,476 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:42,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:42,539 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:10:42,541 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:42,543 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:42,546 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,547 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 empty. 
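The `create 'hbase:rsgroup'` entry above adds two table-level attributes beyond the column family: the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. A hedged sketch of how the same descriptor shape is expressed with the public client API (again an illustration, not the RSGroup startup worker's actual code):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint;
import org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy;
import org.apache.hadoop.hbase.util.Bytes;

public class RSGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("hbase", "rsgroup"))
          // coprocessor$1 => MultiRowMutationEndpoint, default user priority 536870911
          .setCoprocessor(MultiRowMutationEndpoint.class.getName())
          // SPLIT_POLICY => DisabledRegionSplitPolicy, so the single 'm' region never splits
          .setRegionSplitPolicyClassName(DisabledRegionSplitPolicy.class.getName())
          .setColumnFamily(ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("m"))
              .setMaxVersions(1)               // VERSIONS => '1'
              .build())
          .build();
      // Drives a CreateTableProcedure like pid=6 in the log.
      admin.createTable(td);
    }
  }
}
```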
2023-07-24 18:10:42,548 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,548 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 18:10:42,572 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:42,574 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f93db382913b37f9661cac1fd8ee01a9, NAME => 'hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:42,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f93db382913b37f9661cac1fd8ee01a9, disabling compactions & flushes 2023-07-24 18:10:42,596 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,596 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. after waiting 0 ms 2023-07-24 18:10:42,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,597 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
2023-07-24 18:10:42,597 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:10:42,601 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:42,602 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222242602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222242602"}]},"ts":"1690222242602"} 2023-07-24 18:10:42,606 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:42,608 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:42,608 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222242608"}]},"ts":"1690222242608"} 2023-07-24 18:10:42,614 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 18:10:42,619 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:42,620 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:42,620 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:42,620 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:42,620 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:42,620 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN}] 2023-07-24 18:10:42,622 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN 2023-07-24 18:10:42,624 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34741,1690222239908; forceNewPlan=false, retain=false 2023-07-24 18:10:42,625 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 18:10:42,626 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:42,626 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:42,627 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222242626"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222242626"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222242626"}]},"ts":"1690222242626"} 2023-07-24 18:10:42,627 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222242626"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222242626"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222242626"}]},"ts":"1690222242626"} 2023-07-24 18:10:42,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,34741,1690222239908}] 2023-07-24 18:10:42,634 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=5, state=RUNNABLE; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:42,789 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:42,789 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:42,799 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58264, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:42,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93db382913b37f9661cac1fd8ee01a9, NAME => 'hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:42,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. service=MultiRowMutationService 2023-07-24 18:10:42,801 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
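The RegionStateStore entries above record each region moving to OPENING with a chosen regionLocation. From the client side, the resulting placement can be observed with a RegionLocator; this is an illustrative sketch with an assumed connection, not part of the test itself:

```java
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocateRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase", "namespace"))) {
      // hbase:namespace has a single region covering the whole key space,
      // so any row key resolves to it; "default" is an arbitrary example key.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("default"), true /* reload */);
      System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
    }
  }
}
```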
2023-07-24 18:10:42,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,802 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,805 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,806 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:42,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:42,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,807 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:10:42,807 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:10:42,808 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93db382913b37f9661cac1fd8ee01a9 columnFamilyName m 2023-07-24 18:10:42,809 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,809 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(310): Store=f93db382913b37f9661cac1fd8ee01a9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,811 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,812 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:10:42,812 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,812 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:10:42,812 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e0fb36cbe9750f5f2b47d078547932 columnFamilyName info 2023-07-24 18:10:42,813 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(310): Store=b3e0fb36cbe9750f5f2b47d078547932/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,814 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 
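The CompactionConfiguration lines above dump the effective compaction settings for each store (minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, and so on). Those values are driven by standard configuration keys; a sketch of overriding them programmatically, with the values chosen only to mirror the log and the key names taken from the stock configuration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // minFilesToCompact / maxFilesToCompact in the log line above.
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    // "ratio 1.200000" / "off-peak ratio 5.000000".
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // "throttle point 2684354560" (2.5 GiB).
    conf.setLong("hbase.regionserver.thread.compaction.throttle", 2684354560L);
    System.out.println("compaction.min = " + conf.getInt("hbase.hstore.compaction.min", -1));
  }
}
```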
2023-07-24 18:10:42,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:42,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:42,821 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:42,822 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93db382913b37f9661cac1fd8ee01a9; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3e78545a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:42,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:10:42,825 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., pid=8, masterSystemTime=1690222242785 2023-07-24 18:10:42,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:42,826 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e0fb36cbe9750f5f2b47d078547932; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11321463040, jitterRate=0.05439341068267822}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:42,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:10:42,828 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., pid=9, masterSystemTime=1690222242789 2023-07-24 18:10:42,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:42,832 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
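By this point both system regions report "Opened ... next sequenceid=2". In a test built on HBaseTestingUtility, blocking until such a table is actually usable typically looks like the following; TEST_UTIL stands in for the utility instance that produced this log and is an assumption of the sketch:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitTableSketch {
  // Assumed: the already-started testing utility behind this mini cluster.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForNamespaceTable() throws Exception {
    TableName ns = TableName.valueOf("hbase", "namespace");
    // Blocks until all regions of the table are assigned and reachable.
    TEST_UTIL.waitTableAvailable(ns);
    try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
      assert admin.isTableAvailable(ns);
    }
  }
}
```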
2023-07-24 18:10:42,832 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:42,833 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:42,833 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222242831"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222242831"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222242831"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222242831"}]},"ts":"1690222242831"} 2023-07-24 18:10:42,834 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:42,834 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222242834"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222242834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222242834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222242834"}]},"ts":"1690222242834"} 2023-07-24 18:10:42,842 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-24 18:10:42,843 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,34741,1690222239908 in 206 msec 2023-07-24 18:10:42,846 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=5 2023-07-24 18:10:42,846 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=5, state=SUCCESS; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,35913,1690222239741 in 205 msec 2023-07-24 18:10:42,849 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 18:10:42,849 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN in 223 msec 2023-07-24 18:10:42,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 18:10:42,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN in 375 msec 2023-07-24 18:10:42,851 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:42,851 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222242851"}]},"ts":"1690222242851"} 2023-07-24 18:10:42,853 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:42,853 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222242853"}]},"ts":"1690222242853"} 2023-07-24 18:10:42,855 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 18:10:42,857 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 18:10:42,858 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:42,860 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:42,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 322 msec 2023-07-24 18:10:42,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 575 msec 2023-07-24 18:10:42,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 18:10:42,902 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:42,903 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:42,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:42,927 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58278, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:42,941 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 18:10:42,947 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:10:42,948 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 18:10:42,962 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:42,968 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 36 msec 2023-07-24 18:10:42,974 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:10:42,986 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:42,993 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-24 18:10:43,000 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:10:43,003 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:10:43,003 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.896sec 2023-07-24 18:10:43,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 18:10:43,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 18:10:43,007 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:10:43,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34677,1690222237492-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:10:43,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34677,1690222237492-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
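The two CreateNamespaceProcedure entries above (pid=10 and pid=11) bootstrap the built-in 'default' and 'hbase' namespaces; the master does this itself. For a user-defined namespace the equivalent public-API call is sketched below, with the namespace name invented for the example:

```java
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Triggers a CreateNamespaceProcedure on the master, like pid=10/11 in the log.
      admin.createNamespace(NamespaceDescriptor.create("example_ns").build());
      for (NamespaceDescriptor nd : admin.listNamespaceDescriptors()) {
        System.out.println(nd.getName());
      }
    }
  }
}
```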
2023-07-24 18:10:43,015 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:43,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,017 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:43,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:10:43,023 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:10:43,074 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x0babe21e to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:43,082 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@76d3efca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:43,099 DEBUG [hconnection-0x1e045fb3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:43,116 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53776, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:43,126 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:43,127 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:43,137 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:10:43,140 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:10:43,153 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:10:43,153 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:43,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:10:43,159 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x6d6bb4b4 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:43,164 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19d31601, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:43,165 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:10:43,168 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:43,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101988716b4000a connected 2023-07-24 18:10:43,215 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=423, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=617, ProcessCount=177, AvailableMemoryMB=5685 2023-07-24 18:10:43,217 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-24 18:10:43,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,291 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:10:43,306 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:43,306 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:43,307 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:43,307 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:43,307 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:43,307 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:43,307 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:43,312 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41915 2023-07-24 18:10:43,312 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating 
BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:43,314 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:43,315 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:43,319 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:43,328 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41915 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:10:43,338 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:419150x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:43,339 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:419150x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:43,340 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41915-0x101988716b4000b connected 2023-07-24 18:10:43,341 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:10:43,342 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:43,362 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41915 2023-07-24 18:10:43,363 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41915 2023-07-24 18:10:43,363 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41915 2023-07-24 18:10:43,378 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41915 2023-07-24 18:10:43,378 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41915 2023-07-24 18:10:43,381 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:43,381 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:43,381 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:43,381 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:43,382 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:43,382 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:43,382 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:43,382 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 46685 2023-07-24 18:10:43,382 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:43,384 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:43,384 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50e6c6eb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:43,384 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:43,385 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f60de8e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:43,513 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:43,514 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:43,514 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:43,515 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:43,515 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:43,516 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4988aa8e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-46685-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4706111284913177513/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:43,518 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@4ba2fb0a{HTTP/1.1, (http/1.1)}{0.0.0.0:46685} 2023-07-24 18:10:43,518 INFO [Listener at localhost/44627] server.Server(415): Started @11738ms 2023-07-24 18:10:43,522 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:10:43,523 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 
18:10:43,527 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:43,527 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:43,530 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:43,532 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ReadOnlyZKClient(139): Connect 0x3bf03c53 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:43,537 DEBUG [RS:3;jenkins-hbase4:41915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3537c55d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:43,538 DEBUG [RS:3;jenkins-hbase4:41915] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31085723, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:43,549 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41915 2023-07-24 18:10:43,549 INFO [RS:3;jenkins-hbase4:41915] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:43,549 INFO [RS:3;jenkins-hbase4:41915] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:43,549 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:43,550 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:41915, startcode=1690222243305 2023-07-24 18:10:43,550 DEBUG [RS:3;jenkins-hbase4:41915] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:43,557 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54651, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:43,558 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,558 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
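As the new region server registers ("Registering regionserver=jenkins-hbase4.apache.org,41915,..."), the RSGroupInfoManagerImpl listener thread recomputes the default group's membership ("Updating default servers."). A sketch of how a client could inspect that membership, assuming the RSGroupAdminClient shipped with the hbase-rsgroup module on branch-2.4; the class and method names are taken from that module and should be treated as an assumption of the sketch:

```java
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMembershipSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Every server belongs to the default group unless it has been moved elsewhere.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      for (Address server : defaultGroup.getServers()) {
        System.out.println("default group member: " + server);
      }
    }
  }
}
```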
2023-07-24 18:10:43,558 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:43,558 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:10:43,559 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34023 2023-07-24 18:10:43,566 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:43,566 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:43,566 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:43,566 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:43,566 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,567 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41915,1690222243305] 2023-07-24 18:10:43,567 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,567 WARN [RS:3;jenkins-hbase4:41915] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:10:43,567 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:43,567 INFO [RS:3;jenkins-hbase4:41915] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:43,567 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:43,567 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,567 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:43,567 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:43,568 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,576 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:10:43,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,576 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:43,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,578 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:43,579 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:43,582 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:43,582 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,583 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,583 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:43,584 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:43,584 INFO [RS:3;jenkins-hbase4:41915] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:43,588 INFO [RS:3;jenkins-hbase4:41915] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:43,588 INFO [RS:3;jenkins-hbase4:41915] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:43,588 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:43,588 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:43,590 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,591 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,592 DEBUG [RS:3;jenkins-hbase4:41915] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:43,593 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:43,594 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:43,594 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:43,605 INFO [RS:3;jenkins-hbase4:41915] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:43,605 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41915,1690222243305-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:43,616 INFO [RS:3;jenkins-hbase4:41915] regionserver.Replication(203): jenkins-hbase4.apache.org,41915,1690222243305 started 2023-07-24 18:10:43,616 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41915,1690222243305, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41915, sessionid=0x101988716b4000b 2023-07-24 18:10:43,616 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:43,617 DEBUG [RS:3;jenkins-hbase4:41915] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,617 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41915,1690222243305' 2023-07-24 18:10:43,617 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:43,617 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41915,1690222243305' 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:43,618 DEBUG [RS:3;jenkins-hbase4:41915] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:43,618 INFO [RS:3;jenkins-hbase4:41915] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:43,618 INFO [RS:3;jenkins-hbase4:41915] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:10:43,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:43,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:43,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:43,638 DEBUG [hconnection-0x3122a34e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:43,642 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53792, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:43,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:43,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:43,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54766 deadline: 1690223443660, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
2023-07-24 18:10:43,662 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:43,664 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:43,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,666 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34741, jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:43,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:43,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:43,674 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-24 18:10:43,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:43,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:43,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup deadServerGroup 2023-07-24 18:10:43,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 18:10:43,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:43,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:43,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34741] to rsgroup deadServerGroup 2023-07-24 18:10:43,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 18:10:43,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:43,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(238): Moving server region f93db382913b37f9661cac1fd8ee01a9, which do not belong to RSGroup deadServerGroup 2023-07-24 18:10:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:43,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:43,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE 2023-07-24 18:10:43,708 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE 2023-07-24 18:10:43,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup deadServerGroup 2023-07-24 18:10:43,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:43,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:43,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:43,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:43,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number 
of hosts=1, number of racks=1 2023-07-24 18:10:43,709 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,709 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222243709"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222243709"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222243709"}]},"ts":"1690222243709"} 2023-07-24 18:10:43,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:10:43,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-24 18:10:43,711 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:10:43,712 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34741,1690222239908, state=CLOSING 2023-07-24 18:10:43,713 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,34741,1690222239908}] 2023-07-24 18:10:43,713 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:43,713 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:43,713 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34741,1690222239908}] 2023-07-24 18:10:43,720 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:43,722 INFO [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41915%2C1690222243305, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:43,745 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:43,745 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:43,745 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:43,752 INFO [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305/jenkins-hbase4.apache.org%2C41915%2C1690222243305.1690222243724 2023-07-24 18:10:43,752 DEBUG [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:10:43,877 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-24 18:10:43,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:43,878 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:43,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:43,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:43,878 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:43,879 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-24 18:10:43,977 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:10:44,079 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:10:44,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/b7efcf27a4234e8cb81fe70d74c707cd as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:10:44,101 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd, entries=21, sequenceid=15, filesize=7.1 K 2023-07-24 18:10:44,104 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/ed4eee4aebd4497b91a21f8f303e8b08 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:10:44,118 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08, entries=4, sequenceid=15, filesize=4.8 K 2023-07-24 18:10:44,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 242ms, sequenceid=15, compaction requested=false 2023-07-24 18:10:44,123 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:10:44,137 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-24 18:10:44,138 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:44,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:44,139 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:44,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,41915,1690222243305 record at close sequenceid=15 2023-07-24 18:10:44,142 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-24 18:10:44,143 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-24 18:10:44,146 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-24 18:10:44,146 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34741,1690222239908 in 430 msec 2023-07-24 18:10:44,147 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41915,1690222243305; forceNewPlan=false, retain=false 2023-07-24 18:10:44,297 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:44,297 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41915,1690222243305, state=OPENING 2023-07-24 18:10:44,299 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:44,299 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:44,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:44,459 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:44,459 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:44,464 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39264, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:44,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:10:44,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:44,479 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41915%2C1690222243305.meta, suffix=.meta, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:44,500 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:44,501 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:44,502 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:44,507 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305/jenkins-hbase4.apache.org%2C41915%2C1690222243305.meta.1690222244481.meta 2023-07-24 18:10:44,509 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:10:44,509 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:10:44,510 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:10:44,510 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:10:44,513 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:44,514 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:10:44,514 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:10:44,515 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:44,527 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:10:44,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:44,528 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:44,530 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:44,530 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:44,530 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:44,531 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:44,532 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:44,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:10:44,533 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:10:44,534 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:44,546 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:10:44,546 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:44,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:44,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:10:44,554 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:10:44,556 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:44,558 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11401711200, jitterRate=0.06186710298061371}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:44,558 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:44,559 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1690222244459 2023-07-24 18:10:44,567 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:10:44,568 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:10:44,569 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41915,1690222243305, state=OPEN 2023-07-24 18:10:44,570 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:44,570 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:44,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-24 18:10:44,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,41915,1690222243305 in 271 msec 2023-07-24 18:10:44,577 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 865 msec 2023-07-24 18:10:44,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-24 18:10:44,723 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:44,724 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f93db382913b37f9661cac1fd8ee01a9, disabling compactions & flushes 2023-07-24 18:10:44,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:44,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:44,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. after waiting 0 ms 2023-07-24 18:10:44,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:44,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f93db382913b37f9661cac1fd8ee01a9 1/1 column families, dataSize=1.27 KB heapSize=2.24 KB 2023-07-24 18:10:44,782 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.27 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:10:44,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/d5cd966a907b4e6e86b91fb7d6889add as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:10:44,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add, entries=3, sequenceid=9, filesize=5.1 K 2023-07-24 18:10:44,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.27 KB/1298, heapSize ~2.23 KB/2280, currentSize=0 B/0 for f93db382913b37f9661cac1fd8ee01a9 in 134ms, sequenceid=9, compaction requested=false 2023-07-24 18:10:44,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:10:44,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 18:10:44,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:44,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:44,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:10:44,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f93db382913b37f9661cac1fd8ee01a9 move to jenkins-hbase4.apache.org,43449,1690222239527 record at close sequenceid=9 2023-07-24 18:10:44,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:44,895 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=CLOSED 2023-07-24 18:10:44,895 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222244895"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222244895"}]},"ts":"1690222244895"} 2023-07-24 18:10:44,896 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34741] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:53774 deadline: 1690222304896, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41915 startCode=1690222243305. As of locationSeqNum=15. 2023-07-24 18:10:44,998 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:44,999 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:45,007 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-24 18:10:45,007 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,34741,1690222239908 in 1.2910 sec 2023-07-24 18:10:45,008 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43449,1690222239527; forceNewPlan=false, retain=false 2023-07-24 18:10:45,158 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:45,159 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:45,159 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222245159"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245159"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245159"}]},"ts":"1690222245159"} 2023-07-24 18:10:45,162 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:45,318 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:45,318 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:45,322 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50994, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:45,327 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:45,327 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93db382913b37f9661cac1fd8ee01a9, NAME => 'hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:45,327 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:45,327 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. service=MultiRowMutationService 2023-07-24 18:10:45,328 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 18:10:45,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:45,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,331 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,332 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:10:45,332 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:10:45,333 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93db382913b37f9661cac1fd8ee01a9 columnFamilyName m 2023-07-24 18:10:45,353 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:10:45,353 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(310): Store=f93db382913b37f9661cac1fd8ee01a9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:45,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:45,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93db382913b37f9661cac1fd8ee01a9; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3b46cb64, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:45,364 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:10:45,365 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., pid=17, masterSystemTime=1690222245318 2023-07-24 18:10:45,370 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:45,370 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:45,371 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:45,371 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222245371"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222245371"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222245371"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222245371"}]},"ts":"1690222245371"} 2023-07-24 18:10:45,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-24 18:10:45,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,43449,1690222239527 in 212 msec 2023-07-24 18:10:45,380 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE in 1.6710 sec 2023-07-24 18:10:45,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34741,1690222239908] are moved back to default 2023-07-24 18:10:45,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-24 18:10:45,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:45,714 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34741] ipc.CallRunner(144): 
callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:53792 deadline: 1690222305714, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43449 startCode=1690222239527. As of locationSeqNum=9. 2023-07-24 18:10:45,818 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34741] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53792 deadline: 1690222305818, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41915 startCode=1690222243305. As of locationSeqNum=15. 2023-07-24 18:10:45,920 DEBUG [hconnection-0x3122a34e-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:45,925 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39290, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:45,941 DEBUG [hconnection-0x3122a34e-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:45,948 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51008, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:45,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:45,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:45,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-24 18:10:45,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:45,973 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:45,975 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:45,976 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34741] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34741,1690222239908' ***** 2023-07-24 18:10:45,976 INFO [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34741] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x1e045fb3 2023-07-24 18:10:45,976 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:45,983 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:45,989 INFO [RS:2;jenkins-hbase4:34741] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@3749e1fd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:45,994 INFO [RS:2;jenkins-hbase4:34741] server.AbstractConnector(383): Stopped ServerConnector@595b0d99{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:45,994 INFO [RS:2;jenkins-hbase4:34741] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:45,995 INFO [RS:2;jenkins-hbase4:34741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d2b850b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:45,996 INFO [RS:2;jenkins-hbase4:34741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f1afb19{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:45,998 INFO [RS:2;jenkins-hbase4:34741] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:45,998 INFO [RS:2;jenkins-hbase4:34741] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:45,998 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:45,999 INFO [RS:2;jenkins-hbase4:34741] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:45,999 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:45,999 DEBUG [RS:2;jenkins-hbase4:34741] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x049f4959 to 127.0.0.1:59012 2023-07-24 18:10:45,999 DEBUG [RS:2;jenkins-hbase4:34741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:45,999 INFO [RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34741,1690222239908; all regions closed. 
2023-07-24 18:10:46,020 DEBUG [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:46,020 INFO [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34741%2C1690222239908.meta:.meta(num 1690222242027) 2023-07-24 18:10:46,027 DEBUG [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:46,028 INFO [RS:2;jenkins-hbase4:34741] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34741%2C1690222239908:(num 1690222241849) 2023-07-24 18:10:46,028 DEBUG [RS:2;jenkins-hbase4:34741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:46,028 INFO [RS:2;jenkins-hbase4:34741] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:46,029 INFO [RS:2;jenkins-hbase4:34741] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:46,029 INFO [RS:2;jenkins-hbase4:34741] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:46,029 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:46,029 INFO [RS:2;jenkins-hbase4:34741] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:46,029 INFO [RS:2;jenkins-hbase4:34741] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:46,030 INFO [RS:2;jenkins-hbase4:34741] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34741 2023-07-24 18:10:46,041 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:46,041 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,042 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,042 ERROR [Listener at localhost/44627-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@149b92bf rejected from java.util.concurrent.ThreadPoolExecutor@10a414f4[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1374) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-24 18:10:46,043 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34741,1690222239908] 2023-07-24 18:10:46,044 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34741,1690222239908; numProcessing=1 2023-07-24 18:10:46,044 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,046 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34741,1690222239908 already deleted, retry=false 2023-07-24 18:10:46,046 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,046 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, 
quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,046 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,046 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,34741,1690222239908 on jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:46,047 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,048 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 znode expired, triggering replicatorRemoved event 2023-07-24 18:10:46,050 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,053 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 znode expired, triggering replicatorRemoved event 2023-07-24 18:10:46,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,053 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,34741,1690222239908 znode expired, triggering replicatorRemoved event 2023-07-24 18:10:46,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,054 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:46,054 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,060 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false 2023-07-24 18:10:46,060 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=18 for jenkins-hbase4.apache.org,34741,1690222239908 (carryingMeta=false) jenkins-hbase4.apache.org,34741,1690222239908/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@686b2e52[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:10:46,061 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:46,064 WARN [RS-EventLoopGroup-5-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:34741 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:34741 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:46,064 DEBUG [RS-EventLoopGroup-5-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:34741 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:34741 2023-07-24 18:10:46,067 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=18, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false 2023-07-24 18:10:46,069 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,34741,1690222239908 had 0 regions 2023-07-24 18:10:46,071 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=18, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false, isMeta: false 2023-07-24 18:10:46,073 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908-splitting 2023-07-24 18:10:46,074 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908-splitting dir is empty, no logs to split. 2023-07-24 18:10:46,074 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,34741,1690222239908 WAL count=0, meta=false 2023-07-24 18:10:46,079 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908-splitting dir is empty, no logs to split. 2023-07-24 18:10:46,079 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,34741,1690222239908 WAL count=0, meta=false 2023-07-24 18:10:46,079 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,34741,1690222239908 WAL splitting is done? 
wals=0, meta=false 2023-07-24 18:10:46,086 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,34741,1690222239908 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34741,1690222239908-splitting does not exist. 2023-07-24 18:10:46,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-24 18:10:46,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:46,089 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,34741,1690222239908 after splitting done 2023-07-24 18:10:46,089 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,34741,1690222239908 from processing; numProcessing=0 2023-07-24 18:10:46,092 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false in 41 msec 2023-07-24 18:10:46,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:46,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:46,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:46,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:46,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:46,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:46,177 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:46,178 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51010, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:46,181 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,182 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:46,182 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 18:10:46,183 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:46,192 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:10:46,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 18:10:46,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:46,202 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:46,202 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34741-0x101988716b40003, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:46,202 INFO 
[RS:2;jenkins-hbase4:34741] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34741,1690222239908; zookeeper connection closed. 2023-07-24 18:10:46,203 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@76800bae] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@76800bae 2023-07-24 18:10:46,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:46,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:46,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:46,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:46,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34741] to rsgroup default 2023-07-24 18:10:46,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase4.apache.org:34741 during move-to-default rsgroup because not online 2023-07-24 18:10:46,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-24 18:10:46,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:46,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-24 18:10:46,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-24 18:10:46,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-24 18:10:46,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:46,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup deadServerGroup 2023-07-24 18:10:46,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:46,228 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:46,232 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:10:46,245 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:46,246 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:46,247 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37467 2023-07-24 18:10:46,248 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:46,249 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:46,250 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:46,251 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:46,252 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37467 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:10:46,256 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:374670x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:46,258 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37467-0x101988716b4000d connected 2023-07-24 18:10:46,258 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:46,258 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, 
baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:10:46,259 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:46,269 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37467 2023-07-24 18:10:46,269 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37467 2023-07-24 18:10:46,269 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37467 2023-07-24 18:10:46,270 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37467 2023-07-24 18:10:46,272 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37467 2023-07-24 18:10:46,274 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:46,275 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:46,275 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:46,275 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:46,275 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:46,275 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:46,276 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:10:46,276 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 34423 2023-07-24 18:10:46,276 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:46,282 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:46,282 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@497e6e88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:46,283 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:46,283 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5cbf4294{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:46,407 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:46,407 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:46,408 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:46,408 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:46,409 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:46,410 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7af1f8e9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-34423-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7254756624270283345/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:46,412 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@6a64c925{HTTP/1.1, (http/1.1)}{0.0.0.0:34423} 2023-07-24 18:10:46,412 INFO [Listener at localhost/44627] server.Server(415): Started @14632ms 2023-07-24 18:10:46,416 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:10:46,418 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:46,425 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:46,425 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:46,428 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:46,430 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ReadOnlyZKClient(139): Connect 0x6fc2f1f5 to 
127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:46,437 DEBUG [RS:4;jenkins-hbase4:37467] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6105127d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:46,437 DEBUG [RS:4;jenkins-hbase4:37467] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@653a9a85, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:46,448 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:37467 2023-07-24 18:10:46,448 INFO [RS:4;jenkins-hbase4:37467] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:46,448 INFO [RS:4;jenkins-hbase4:37467] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:46,448 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:46,449 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34677,1690222237492 with isa=jenkins-hbase4.apache.org/172.31.14.131:37467, startcode=1690222246245 2023-07-24 18:10:46,449 DEBUG [RS:4;jenkins-hbase4:37467] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:46,452 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42773, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:46,453 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34677] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,453 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:10:46,453 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:10:46,453 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:10:46,453 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34023 2023-07-24 18:10:46,455 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,455 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,456 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,455 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:46,457 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,457 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37467,1690222246245] 2023-07-24 18:10:46,457 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,458 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,458 WARN [RS:4;jenkins-hbase4:37467] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:10:46,458 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:46,458 INFO [RS:4;jenkins-hbase4:37467] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:46,458 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,458 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,463 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34677,1690222237492] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:10:46,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,464 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,464 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,467 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:46,467 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:46,467 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,468 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ZKUtil(162): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,469 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:46,469 INFO [RS:4;jenkins-hbase4:37467] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:46,471 INFO [RS:4;jenkins-hbase4:37467] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:46,471 INFO [RS:4;jenkins-hbase4:37467] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:46,471 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,471 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:46,473 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,473 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,473 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,474 DEBUG [RS:4;jenkins-hbase4:37467] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:46,475 
INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,475 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,475 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,490 INFO [RS:4;jenkins-hbase4:37467] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:46,490 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37467,1690222246245-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:46,506 INFO [RS:4;jenkins-hbase4:37467] regionserver.Replication(203): jenkins-hbase4.apache.org,37467,1690222246245 started 2023-07-24 18:10:46,507 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37467,1690222246245, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37467, sessionid=0x101988716b4000d 2023-07-24 18:10:46,507 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:46,507 DEBUG [RS:4;jenkins-hbase4:37467] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,507 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37467,1690222246245' 2023-07-24 18:10:46,507 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:46,507 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37467,1690222246245' 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:46,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:46,508 DEBUG [RS:4;jenkins-hbase4:37467] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:46,509 DEBUG [RS:4;jenkins-hbase4:37467] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:46,509 INFO [RS:4;jenkins-hbase4:37467] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:46,509 INFO [RS:4;jenkins-hbase4:37467] 
quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:10:46,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:46,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:46,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:46,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:46,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:46,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54766 deadline: 1690223446526, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:46,527 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:46,529 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:46,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,531 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:46,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:46,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:46,563 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=480 (was 423) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-882505426_17 at /127.0.0.1:36134 [Waiting for operation #16] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41915 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x485668fb-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-882505426_17 at /127.0.0.1:40054 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,41915,1690222243305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:60368 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2120374177-769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase4:37467-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:60452 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1270905017-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (449330859) connection to localhost/127.0.0.1:44619 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-882505426_17 at /127.0.0.1:60476 
[Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:40066 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x485668fb-shared-pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120374177-771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1270905017-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at 
/127.0.0.1:60440 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1270905017-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1641786643_17 at /127.0.0.1:60388 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1270905017-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1270905017-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (449330859) connection to localhost/127.0.0.1:44619 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x485668fb-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41915Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x3bf03c53-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1270905017-638-acceptor-0@2376f5c7-ServerConnector@4ba2fb0a{HTTP/1.1, (http/1.1)}{0.0.0.0:46685} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-882505426_17 at /127.0.0.1:60430 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:60340 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6fc2f1f5-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2120374177-770 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase4:37467 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1270905017-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,41915,1690222243305.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120374177-767 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2120374177-772 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1641786643_17 at /127.0.0.1:60284 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120374177-768 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x3bf03c53-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2120374177-766-acceptor-0@155679dd-ServerConnector@6a64c925{HTTP/1.1, (http/1.1)}{0.0.0.0:34423} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1270905017-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37467 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x3bf03c53 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37467Replication Statistics #0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:40034 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41915 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-dd091d5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x485668fb-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6fc2f1f5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2120374177-765 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41915-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6fc2f1f5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-1eb94391-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=752 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=599 (was 617), ProcessCount=177 (was 177), AvailableMemoryMB=5573 (was 5685) 2023-07-24 18:10:46,580 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=480, OpenFileDescriptor=752, MaxFileDescriptor=60000, SystemLoadAverage=599, ProcessCount=177, AvailableMemoryMB=5571 2023-07-24 18:10:46,583 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-24 18:10:46,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:46,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
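
The admin traffic that begins here (list rsgroup, move tables [] to rsgroup default, followed just below by move servers, remove rsgroup master and add rsgroup master) is the per-test group cleanup in TestRSGroupsBase driving the RSGroupAdmin endpoint on the master. A minimal sketch of issuing the same calls with the branch-2.4 RSGroupAdminClient; the wrapper class name and the connection bootstrap are assumptions, not taken from the test:

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hypothetical helper class, not part of TestRSGroupsBase.
    public class RsGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // "list rsgroup": enumerate the groups stored under /hbase/rsgroup.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " servers=" + group.getServers());
          }

          // "move tables [] to rsgroup default" / "move servers [] to rsgroup default":
          // empty sets are sent as part of cleanup and ignored on the master side.
          rsGroupAdmin.moveTables(Collections.<TableName>emptySet(), RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.moveServers(Collections.<Address>emptySet(), RSGroupInfo.DEFAULT_GROUP);

          // "remove rsgroup master" then "add rsgroup master": drop and recreate the group
          // so each test method starts from the same group layout.
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");
        }
      }
    }

The empty-set moveTables/moveServers calls are deliberate no-ops during cleanup; the master simply logs and skips them, which is what the "moveTables() passed an empty set. Ignoring." DEBUG entry above records.
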
2023-07-24 18:10:46,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:46,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:46,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:46,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:46,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:46,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:46,605 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:46,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:46,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:46,611 INFO [RS:4;jenkins-hbase4:37467] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37467%2C1690222246245, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:10:46,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:46,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:46,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:46,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:46,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54766 deadline: 1690223446627, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:46,631 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:46,633 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:10:46,633 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:46,636 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:10:46,637 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:10:46,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,641 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:46,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:46,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:46,645 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-24 18:10:46,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-24 18:10:46,655 INFO [RS:4;jenkins-hbase4:37467] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245/jenkins-hbase4.apache.org%2C37467%2C1690222246245.1690222246612 2023-07-24 18:10:46,655 DEBUG [RS:4;jenkins-hbase4:37467] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:10:46,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-24 
18:10:46,674 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:10:46,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 18:10:46,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 22 msec 2023-07-24 18:10:46,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:46,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:46,693 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:46,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 20 2023-07-24 18:10:46,698 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,699 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:46,700 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:46,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:46,702 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:46,705 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:46,706 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 empty. 
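The ModifyNamespaceProcedure above (pid=19) is the master-side half of a client call that re-writes the 'default' namespace with the hbase.rsgroup.name property. A minimal sketch of what that client call could look like, assuming a standard HBase 2.4 Connection/Admin setup (the class name and setup here are illustrative, not taken from the test source):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ModifyDefaultNamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Rebuild the 'default' namespace descriptor with the rsgroup property;
          // the master turns this into a ModifyNamespaceProcedure like pid=19 above.
          NamespaceDescriptor ns = NamespaceDescriptor.create("default")
              .addConfiguration("hbase.rsgroup.name", "default")
              .build();
          admin.modifyNamespace(ns); // returns once the procedure reaches SUCCESS
        }
      }
    }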
2023-07-24 18:10:46,706 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:46,706 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-24 18:10:46,729 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:46,730 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 10cc7243baa0869d7351a8c49c419a63, NAME => 'Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 10cc7243baa0869d7351a8c49c419a63, disabling compactions & flushes 2023-07-24 18:10:46,757 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. after waiting 0 ms 2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:46,757 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 
2023-07-24 18:10:46,757 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 10cc7243baa0869d7351a8c49c419a63: 2023-07-24 18:10:46,762 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:46,764 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222246764"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222246764"}]},"ts":"1690222246764"} 2023-07-24 18:10:46,767 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:46,769 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:46,769 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222246769"}]},"ts":"1690222246769"} 2023-07-24 18:10:46,771 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-24 18:10:46,776 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:46,777 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:46,777 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:46,777 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:46,777 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:46,777 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:46,777 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, ASSIGN}] 2023-07-24 18:10:46,780 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, ASSIGN 2023-07-24 18:10:46,781 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:46,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:46,932 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
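The CreateTableProcedure above (pid=20) walks PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META and ASSIGN_REGIONS for a one-family table. On the client side that whole sequence is a single createTable call; a hedged sketch that builds the descriptor the log prints (family 'f', the remaining attributes being 2.4 defaults), with the wrapper and method name assumed:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    class CreateGroupTestTableSketch {
      // Blocks until the CreateTableProcedure and its ASSIGN subprocedures
      // (pid=20/21/22 in this log) have finished.
      static void createTable(Admin admin) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testCreateAndAssign"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        admin.createTable(desc);
      }
    }

The Admin instance would come from a Connection, as in the namespace sketch earlier.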
2023-07-24 18:10:46,933 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=10cc7243baa0869d7351a8c49c419a63, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:46,933 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222246933"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222246933"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222246933"}]},"ts":"1690222246933"} 2023-07-24 18:10:46,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; OpenRegionProcedure 10cc7243baa0869d7351a8c49c419a63, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:47,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:47,095 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:47,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 10cc7243baa0869d7351a8c49c419a63, NAME => 'Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:47,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:47,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,106 INFO [StoreOpener-10cc7243baa0869d7351a8c49c419a63-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,115 DEBUG [StoreOpener-10cc7243baa0869d7351a8c49c419a63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/f 2023-07-24 18:10:47,115 DEBUG [StoreOpener-10cc7243baa0869d7351a8c49c419a63-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/f 2023-07-24 18:10:47,116 INFO [StoreOpener-10cc7243baa0869d7351a8c49c419a63-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 10cc7243baa0869d7351a8c49c419a63 columnFamilyName f 2023-07-24 18:10:47,117 INFO [StoreOpener-10cc7243baa0869d7351a8c49c419a63-1] regionserver.HStore(310): Store=10cc7243baa0869d7351a8c49c419a63/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:47,124 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,138 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:47,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 10cc7243baa0869d7351a8c49c419a63; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11492382880, jitterRate=0.07031156122684479}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:47,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 10cc7243baa0869d7351a8c49c419a63: 2023-07-24 18:10:47,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63., pid=22, masterSystemTime=1690222247089 2023-07-24 18:10:47,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:47,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 
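At this point the region server has opened the region; the meta update to regionState=OPEN follows just below. A client can observe the resulting placement through the RegionLocator API; a small illustrative check (the class wrapper is an assumption):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    class RegionPlacementSketch {
      // Prints encodedName -> server for each region of the new table,
      // which for this test should match the assignment recorded in hbase:meta.
      static void printLocations(Connection conn) throws IOException {
        TableName table = TableName.valueOf("Group_testCreateAndAssign");
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          List<HRegionLocation> locations = locator.getAllRegionLocations();
          for (HRegionLocation loc : locations) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }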
2023-07-24 18:10:47,160 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=10cc7243baa0869d7351a8c49c419a63, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:47,160 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222247160"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222247160"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222247160"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222247160"}]},"ts":"1690222247160"} 2023-07-24 18:10:47,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 18:10:47,166 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; OpenRegionProcedure 10cc7243baa0869d7351a8c49c419a63, server=jenkins-hbase4.apache.org,35913,1690222239741 in 227 msec 2023-07-24 18:10:47,168 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 18:10:47,170 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, ASSIGN in 389 msec 2023-07-24 18:10:47,175 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:47,175 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222247175"}]},"ts":"1690222247175"} 2023-07-24 18:10:47,178 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-24 18:10:47,189 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=20, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:47,193 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 500 msec 2023-07-24 18:10:47,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:47,309 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-24 18:10:47,310 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:47,315 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:47,318 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58290, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:47,322 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 
18:10:47,324 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42250, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:47,325 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:47,327 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:47,327 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:47,329 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:47,343 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-24 18:10:47,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndAssign 2023-07-24 18:10:47,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,370 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222247370"}]},"ts":"1690222247370"} 2023-07-24 18:10:47,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 18:10:47,373 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-24 18:10:47,377 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-24 18:10:47,379 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, UNASSIGN}] 2023-07-24 18:10:47,381 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, UNASSIGN 2023-07-24 18:10:47,382 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=10cc7243baa0869d7351a8c49c419a63, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:47,383 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222247382"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222247382"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222247382"}]},"ts":"1690222247382"} 2023-07-24 18:10:47,385 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE; CloseRegionProcedure 10cc7243baa0869d7351a8c49c419a63, 
server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:47,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 18:10:47,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 10cc7243baa0869d7351a8c49c419a63, disabling compactions & flushes 2023-07-24 18:10:47,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:47,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:47,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. after waiting 0 ms 2023-07-24 18:10:47,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 2023-07-24 18:10:47,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:47,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63. 
2023-07-24 18:10:47,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 10cc7243baa0869d7351a8c49c419a63: 2023-07-24 18:10:47,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,553 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=10cc7243baa0869d7351a8c49c419a63, regionState=CLOSED 2023-07-24 18:10:47,553 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222247553"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222247553"}]},"ts":"1690222247553"} 2023-07-24 18:10:47,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=24 2023-07-24 18:10:47,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; CloseRegionProcedure 10cc7243baa0869d7351a8c49c419a63, server=jenkins-hbase4.apache.org,35913,1690222239741 in 170 msec 2023-07-24 18:10:47,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=23 2023-07-24 18:10:47,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=10cc7243baa0869d7351a8c49c419a63, UNASSIGN in 179 msec 2023-07-24 18:10:47,562 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222247562"}]},"ts":"1690222247562"} 2023-07-24 18:10:47,564 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-24 18:10:47,566 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-24 18:10:47,568 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 215 msec 2023-07-24 18:10:47,645 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:47,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 18:10:47,676 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-24 18:10:47,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndAssign 2023-07-24 18:10:47,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,700 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=26, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-24 18:10:47,703 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=26, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:47,713 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-24 18:10:47,718 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/recovered.edits] 2023-07-24 18:10:47,735 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63/recovered.edits/4.seqid 2023-07-24 18:10:47,736 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndAssign/10cc7243baa0869d7351a8c49c419a63 2023-07-24 18:10:47,736 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-24 18:10:47,743 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=26, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,748 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:10:47,750 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:10:47,751 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 18:10:47,751 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: 
RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:47,751 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 18:10:47,751 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:10:47,751 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 18:10:47,775 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-24 18:10:47,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-24 18:10:47,831 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-24 18:10:47,833 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=26, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,833 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 2023-07-24 18:10:47,834 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222247833"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:47,836 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:47,836 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 10cc7243baa0869d7351a8c49c419a63, NAME => 'Group_testCreateAndAssign,,1690222246686.10cc7243baa0869d7351a8c49c419a63.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:47,836 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 
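The DisableTableProcedure (pid=23) and the DeleteTableProcedure (pid=26), whose remaining meta cleanup completes just below, correspond to the usual two-step client cleanup; the RSGroupAdminEndpoint hook then removes the deleted table from rsgroup 'default', as logged above. A hedged sketch of the client side (helper name assumed):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    class DropGroupTestTableSketch {
      static void dropTable(Admin admin) throws IOException {
        TableName table = TableName.valueOf("Group_testCreateAndAssign");
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table); // DisableTableProcedure: UNASSIGN, then state=DISABLED
        }
        admin.deleteTable(table);    // DeleteTableProcedure: archive FS layout, purge meta rows
      }
    }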
2023-07-24 18:10:47,836 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222247836"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:47,838 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-24 18:10:47,840 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=26, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:10:47,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 152 msec 2023-07-24 18:10:48,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=26 2023-07-24 18:10:48,020 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 26 completed 2023-07-24 18:10:48,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:48,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
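From here the teardown in TestRSGroupsBase exercises the RSGroup admin service: ListRSGroupInfos, then the MoveTables/MoveServers/RemoveRSGroup/AddRSGroup calls that continue below. A rough sketch of that sequence using RSGroupAdminClient, whose class and method names appear in the stack traces in this log; the exact signatures and the empty-set arguments are assumptions based on what the endpoint logs:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    class RSGroupTeardownSketch {
      static void resetGroups(Connection conn) throws IOException {
        RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo info : groupAdmin.listRSGroups()) {
          System.out.println("group " + info.getName() + " servers=" + info.getServers());
        }
        groupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);  // logged as "passed an empty set. Ignoring."
        groupAdmin.moveServers(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP); // nothing to move back
        groupAdmin.removeRSGroup("master");
        groupAdmin.addRSGroup("master");
      }
    }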
2023-07-24 18:10:48,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:48,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:48,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:48,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:48,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:48,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:48,039 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:48,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:48,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:48,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:48,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:48,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:48,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:48,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 161 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223448055, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:48,056 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:48,058 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:48,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,059 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:48,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:48,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:48,081 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=499 (was 480) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x485668fb-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x485668fb-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280172802_17 at /127.0.0.1:60486 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-372597080_17 at /127.0.0.1:60388 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,37467,1690222246245 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: hconnection-0x485668fb-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280172802_17 at /127.0.0.1:60392 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280172802_17 at /127.0.0.1:40082 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741845_1021] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=772 (was 752) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=599 (was 599), ProcessCount=177 (was 177), AvailableMemoryMB=5536 (was 5571) 2023-07-24 18:10:48,103 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=499, OpenFileDescriptor=772, MaxFileDescriptor=60000, SystemLoadAverage=599, ProcessCount=177, AvailableMemoryMB=5536 2023-07-24 18:10:48,104 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-24 18:10:48,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:48,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
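The "move tables [] to rsgroup default" / "move servers [] to rsgroup default" / "remove rsgroup master" entries here are the per-test cleanup that returns every server and table to the default group before the next method runs. A minimal sketch of that sequence, assuming the RSGroupAdminClient API from the hbase-rsgroup module (illustrative only, not the test's actual source):

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  // Move every non-default group's tables and servers back to "default",
  // then drop the group, mirroring the RSGroupAdminService calls logged above.
  static void restoreDefaultGroup(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
        admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
        admin.removeRSGroup(group.getName());
      }
    }
  }
}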
2023-07-24 18:10:48,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:48,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:48,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:48,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:48,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:48,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:48,125 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:48,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:48,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:48,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:48,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:48,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:48,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:48,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 189 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223448146, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:48,147 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:48,149 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:48,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:48,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:48,150 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:48,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:48,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:48,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:48,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:48,158 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:48,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request 
for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 27 2023-07-24 18:10:48,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-24 18:10:48,160 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,161 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:48,161 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:48,164 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:48,173 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:48,173 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:48,173 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f empty. 2023-07-24 18:10:48,174 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e empty. 
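The create 'Group_testCreateMultiRegion' request above, together with the HRegion "creating {ENCODED => ...}" entries that follow, corresponds to a table pre-split into ten regions between \x00\x02\x04\x06\x08 and \x01\x03\x05\x07\x09. A sketch of a client call that produces that layout, assuming the standard HBase 2.x Admin API (not the test's literal code):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class MultiRegionCreateSketch {
  static void createMultiRegionTable(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testCreateMultiRegion"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    byte[] startKey = {0x00, 0x02, 0x04, 0x06, 0x08};
    byte[] endKey = {0x01, 0x03, 0x05, 0x07, 0x09};
    // The Admin computes the intermediate split points, so the table comes up
    // with ten regions whose boundaries match the HRegion entries in this log.
    admin.createTable(desc, startKey, endKey, 10);
  }
}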
2023-07-24 18:10:48,175 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b empty. 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 empty. 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf empty. 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 empty. 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:48,175 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:48,176 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:48,176 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d empty. 2023-07-24 18:10:48,176 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:48,183 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 empty. 
2023-07-24 18:10:48,183 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:48,183 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:48,183 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 empty. 2023-07-24 18:10:48,183 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd empty. 2023-07-24 18:10:48,183 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:48,184 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:48,184 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:48,184 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:48,184 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-24 18:10:48,205 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:48,208 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => cded9eb3674256077270274766530d6b, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,208 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => f33ab4573a17eaccee0a8a96fbb4b09e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, 
tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,208 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => b7eb9c59d6671ee68da297b518c3d69f, NAME => 'Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-24 18:10:48,311 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing cded9eb3674256077270274766530d6b, disabling compactions & flushes 2023-07-24 18:10:48,312 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. after waiting 0 ms 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:48,312 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 
2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for cded9eb3674256077270274766530d6b: 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,312 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing f33ab4573a17eaccee0a8a96fbb4b09e, disabling compactions & flushes 2023-07-24 18:10:48,312 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => d9b9dfe2f03499bc733af95b9e7d2fe1, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,312 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:48,313 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:48,313 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. after waiting 0 ms 2023-07-24 18:10:48,313 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:48,313 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 
2023-07-24 18:10:48,313 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for f33ab4573a17eaccee0a8a96fbb4b09e: 2023-07-24 18:10:48,313 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9f428bb109715eb70d4ca718e4a695f5, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,315 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,316 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing b7eb9c59d6671ee68da297b518c3d69f, disabling compactions & flushes 2023-07-24 18:10:48,316 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:48,316 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:48,316 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. after waiting 0 ms 2023-07-24 18:10:48,316 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:48,316 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 
2023-07-24 18:10:48,316 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for b7eb9c59d6671ee68da297b518c3d69f: 2023-07-24 18:10:48,317 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3db37070443177e7e2d98fa661f48d55, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,356 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing d9b9dfe2f03499bc733af95b9e7d2fe1, disabling compactions & flushes 2023-07-24 18:10:48,357 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:48,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:48,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. after waiting 0 ms 2023-07-24 18:10:48,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:48,357 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 
2023-07-24 18:10:48,357 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for d9b9dfe2f03499bc733af95b9e7d2fe1: 2023-07-24 18:10:48,358 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => f0b3ad268556b841275517b26b1fdacf, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,374 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,374 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing f0b3ad268556b841275517b26b1fdacf, disabling compactions & flushes 2023-07-24 18:10:48,374 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:48,374 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:48,375 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. after waiting 0 ms 2023-07-24 18:10:48,375 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:48,375 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 
2023-07-24 18:10:48,375 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for f0b3ad268556b841275517b26b1fdacf: 2023-07-24 18:10:48,375 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5c6648a7cb76bf9547578b1066176197, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 5c6648a7cb76bf9547578b1066176197, disabling compactions & flushes 2023-07-24 18:10:48,396 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. after waiting 0 ms 2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:48,396 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 
2023-07-24 18:10:48,396 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 5c6648a7cb76bf9547578b1066176197: 2023-07-24 18:10:48,396 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => e514cdd4cdfd34aeb0d9a95efa3cb7bd, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,416 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,416 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing e514cdd4cdfd34aeb0d9a95efa3cb7bd, disabling compactions & flushes 2023-07-24 18:10:48,416 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:48,416 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:48,416 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. after waiting 0 ms 2023-07-24 18:10:48,416 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:48,416 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 
2023-07-24 18:10:48,417 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for e514cdd4cdfd34aeb0d9a95efa3cb7bd: 2023-07-24 18:10:48,417 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5926c5ea121381a09f35e016b6edea1d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:48,430 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,431 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 5926c5ea121381a09f35e016b6edea1d, disabling compactions & flushes 2023-07-24 18:10:48,431 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:48,431 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:48,431 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. after waiting 0 ms 2023-07-24 18:10:48,431 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:48,431 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 
2023-07-24 18:10:48,431 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 5926c5ea121381a09f35e016b6edea1d: 2023-07-24 18:10:48,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 3db37070443177e7e2d98fa661f48d55, disabling compactions & flushes 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 9f428bb109715eb70d4ca718e4a695f5, disabling compactions & flushes 2023-07-24 18:10:48,755 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:48,755 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. after waiting 0 ms 2023-07-24 18:10:48,755 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. after waiting 0 ms 2023-07-24 18:10:48,756 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:48,756 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 
2023-07-24 18:10:48,756 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 3db37070443177e7e2d98fa661f48d55: 2023-07-24 18:10:48,756 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:48,756 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:48,756 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 9f428bb109715eb70d4ca718e4a695f5: 2023-07-24 18:10:48,771 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:48,775 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,775 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,775 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-24 18:10:48,776 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,776 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,776 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,776 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,776 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,777 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,777 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248774"}]},"ts":"1690222248774"} 2023-07-24 18:10:48,787 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-24 18:10:48,789 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:48,789 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222248789"}]},"ts":"1690222248789"} 2023-07-24 18:10:48,791 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-24 18:10:48,805 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:48,806 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:48,806 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:48,806 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:48,806 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:48,806 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:48,806 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, ASSIGN}, {pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, ASSIGN}, {pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=f33ab4573a17eaccee0a8a96fbb4b09e, ASSIGN}, {pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, ASSIGN}, {pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, ASSIGN}, {pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, ASSIGN}, {pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, ASSIGN}, {pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, ASSIGN}, {pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, ASSIGN}, {pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, ASSIGN}] 2023-07-24 18:10:48,810 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, ASSIGN 2023-07-24 18:10:48,810 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, ASSIGN 2023-07-24 18:10:48,813 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, ASSIGN 2023-07-24 18:10:48,813 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, ASSIGN 2023-07-24 18:10:48,814 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37467,1690222246245; forceNewPlan=false, retain=false 2023-07-24 18:10:48,815 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43449,1690222239527; forceNewPlan=false, retain=false 2023-07-24 18:10:48,815 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41915,1690222243305; forceNewPlan=false, retain=false 2023-07-24 18:10:48,815 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:48,816 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, ASSIGN 2023-07-24 18:10:48,816 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, ASSIGN 2023-07-24 18:10:48,817 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, ASSIGN 2023-07-24 18:10:48,817 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, ASSIGN 2023-07-24 18:10:48,817 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, ASSIGN 2023-07-24 18:10:48,818 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=36, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43449,1690222239527; forceNewPlan=false, retain=false 2023-07-24 18:10:48,819 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=35, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41915,1690222243305; forceNewPlan=false, retain=false 2023-07-24 18:10:48,820 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37467,1690222246245; forceNewPlan=false, retain=false 2023-07-24 18:10:48,820 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=37, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, 
ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:48,820 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37467,1690222246245; forceNewPlan=false, retain=false 2023-07-24 18:10:48,821 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, ASSIGN 2023-07-24 18:10:48,822 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41915,1690222243305; forceNewPlan=false, retain=false 2023-07-24 18:10:48,965 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-24 18:10:48,971 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=e514cdd4cdfd34aeb0d9a95efa3cb7bd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:48,971 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=f33ab4573a17eaccee0a8a96fbb4b09e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:48,971 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=5c6648a7cb76bf9547578b1066176197, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:48,972 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248971"}]},"ts":"1690222248971"} 2023-07-24 18:10:48,972 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248971"}]},"ts":"1690222248971"} 2023-07-24 18:10:48,971 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=9f428bb109715eb70d4ca718e4a695f5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:48,971 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=b7eb9c59d6671ee68da297b518c3d69f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:48,972 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248971"}]},"ts":"1690222248971"} 2023-07-24 18:10:48,972 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248971"}]},"ts":"1690222248971"} 2023-07-24 18:10:48,972 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222248971"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248971"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248971"}]},"ts":"1690222248971"} 2023-07-24 18:10:48,976 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=36, state=RUNNABLE; OpenRegionProcedure e514cdd4cdfd34aeb0d9a95efa3cb7bd, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:48,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=30, state=RUNNABLE; OpenRegionProcedure f33ab4573a17eaccee0a8a96fbb4b09e, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:48,981 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=35, state=RUNNABLE; OpenRegionProcedure 5c6648a7cb76bf9547578b1066176197, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:48,982 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=f0b3ad268556b841275517b26b1fdacf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:48,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248982"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248982"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248982"}]},"ts":"1690222248982"} 2023-07-24 18:10:48,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=32, state=RUNNABLE; OpenRegionProcedure 9f428bb109715eb70d4ca718e4a695f5, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:48,984 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=3db37070443177e7e2d98fa661f48d55, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:48,984 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248983"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248983"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248983"}]},"ts":"1690222248983"} 2023-07-24 18:10:48,984 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=28, state=RUNNABLE; OpenRegionProcedure b7eb9c59d6671ee68da297b518c3d69f, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:48,986 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=34, state=RUNNABLE; OpenRegionProcedure f0b3ad268556b841275517b26b1fdacf, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:48,986 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=cded9eb3674256077270274766530d6b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:48,986 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248986"}]},"ts":"1690222248986"} 2023-07-24 18:10:48,987 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=33, state=RUNNABLE; OpenRegionProcedure 3db37070443177e7e2d98fa661f48d55, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:48,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=29, state=RUNNABLE; OpenRegionProcedure cded9eb3674256077270274766530d6b, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:48,991 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=5926c5ea121381a09f35e016b6edea1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:48,991 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222248991"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248991"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248991"}]},"ts":"1690222248991"} 2023-07-24 18:10:48,992 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=d9b9dfe2f03499bc733af95b9e7d2fe1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:48,992 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222248992"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248992"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248992"}]},"ts":"1690222248992"} 2023-07-24 18:10:48,994 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=37, state=RUNNABLE; 
OpenRegionProcedure 5926c5ea121381a09f35e016b6edea1d, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:48,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=31, state=RUNNABLE; OpenRegionProcedure d9b9dfe2f03499bc733af95b9e7d2fe1, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:49,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e514cdd4cdfd34aeb0d9a95efa3cb7bd, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,135 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9f428bb109715eb70d4ca718e4a695f5, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,138 INFO [StoreOpener-9f428bb109715eb70d4ca718e4a695f5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f 
of region 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,138 INFO [StoreOpener-e514cdd4cdfd34aeb0d9a95efa3cb7bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,140 DEBUG [StoreOpener-9f428bb109715eb70d4ca718e4a695f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/f 2023-07-24 18:10:49,140 DEBUG [StoreOpener-9f428bb109715eb70d4ca718e4a695f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/f 2023-07-24 18:10:49,140 INFO [StoreOpener-9f428bb109715eb70d4ca718e4a695f5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9f428bb109715eb70d4ca718e4a695f5 columnFamilyName f 2023-07-24 18:10:49,140 DEBUG [StoreOpener-e514cdd4cdfd34aeb0d9a95efa3cb7bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/f 2023-07-24 18:10:49,140 DEBUG [StoreOpener-e514cdd4cdfd34aeb0d9a95efa3cb7bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/f 2023-07-24 18:10:49,141 INFO [StoreOpener-9f428bb109715eb70d4ca718e4a695f5-1] regionserver.HStore(310): Store=9f428bb109715eb70d4ca718e4a695f5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,141 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,141 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:49,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,142 INFO [StoreOpener-e514cdd4cdfd34aeb0d9a95efa3cb7bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e514cdd4cdfd34aeb0d9a95efa3cb7bd columnFamilyName f 2023-07-24 18:10:49,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,143 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:49,143 INFO [StoreOpener-e514cdd4cdfd34aeb0d9a95efa3cb7bd-1] regionserver.HStore(310): Store=e514cdd4cdfd34aeb0d9a95efa3cb7bd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,145 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 
2023-07-24 18:10:49,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f0b3ad268556b841275517b26b1fdacf, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-24 18:10:49,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,152 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,152 INFO [StoreOpener-f0b3ad268556b841275517b26b1fdacf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,153 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9f428bb109715eb70d4ca718e4a695f5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10333006240, jitterRate=-0.03766380250453949}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9f428bb109715eb70d4ca718e4a695f5: 2023-07-24 18:10:49,155 DEBUG [StoreOpener-f0b3ad268556b841275517b26b1fdacf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/f 2023-07-24 18:10:49,155 DEBUG [StoreOpener-f0b3ad268556b841275517b26b1fdacf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/f 2023-07-24 18:10:49,156 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5., pid=41, masterSystemTime=1690222249131 2023-07-24 18:10:49,156 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:49,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5926c5ea121381a09f35e016b6edea1d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-24 18:10:49,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,157 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,158 INFO [StoreOpener-f0b3ad268556b841275517b26b1fdacf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f0b3ad268556b841275517b26b1fdacf columnFamilyName f 2023-07-24 18:10:49,159 INFO [StoreOpener-f0b3ad268556b841275517b26b1fdacf-1] regionserver.HStore(310): Store=f0b3ad268556b841275517b26b1fdacf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 
2023-07-24 18:10:49,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f33ab4573a17eaccee0a8a96fbb4b09e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-24 18:10:49,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,160 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=9f428bb109715eb70d4ca718e4a695f5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,160 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249160"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249160"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249160"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249160"}]},"ts":"1690222249160"} 2023-07-24 18:10:49,163 INFO [StoreOpener-5926c5ea121381a09f35e016b6edea1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e514cdd4cdfd34aeb0d9a95efa3cb7bd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9569005280, jitterRate=-0.10881693661212921}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e514cdd4cdfd34aeb0d9a95efa3cb7bd: 2023-07-24 18:10:49,165 DEBUG [StoreOpener-5926c5ea121381a09f35e016b6edea1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/f 2023-07-24 18:10:49,165 DEBUG [StoreOpener-5926c5ea121381a09f35e016b6edea1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/f 2023-07-24 18:10:49,166 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd., pid=38, masterSystemTime=1690222249130 2023-07-24 18:10:49,166 INFO [StoreOpener-5926c5ea121381a09f35e016b6edea1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5926c5ea121381a09f35e016b6edea1d columnFamilyName f 2023-07-24 18:10:49,167 INFO [StoreOpener-5926c5ea121381a09f35e016b6edea1d-1] regionserver.HStore(310): Store=5926c5ea121381a09f35e016b6edea1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,168 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 
2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7eb9c59d6671ee68da297b518c3d69f, NAME => 'Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,169 INFO [StoreOpener-f33ab4573a17eaccee0a8a96fbb4b09e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,169 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=32 2023-07-24 18:10:49,169 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=e514cdd4cdfd34aeb0d9a95efa3cb7bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:49,170 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249169"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249169"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249169"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249169"}]},"ts":"1690222249169"} 2023-07-24 18:10:49,169 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=32, state=SUCCESS; OpenRegionProcedure 9f428bb109715eb70d4ca718e4a695f5, server=jenkins-hbase4.apache.org,41915,1690222243305 in 181 msec 2023-07-24 18:10:49,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,172 INFO [StoreOpener-b7eb9c59d6671ee68da297b518c3d69f-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,172 DEBUG [StoreOpener-f33ab4573a17eaccee0a8a96fbb4b09e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/f 2023-07-24 18:10:49,172 DEBUG [StoreOpener-f33ab4573a17eaccee0a8a96fbb4b09e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/f 2023-07-24 18:10:49,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, ASSIGN in 364 msec 2023-07-24 18:10:49,172 INFO [StoreOpener-f33ab4573a17eaccee0a8a96fbb4b09e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f33ab4573a17eaccee0a8a96fbb4b09e columnFamilyName f 2023-07-24 18:10:49,174 INFO [StoreOpener-f33ab4573a17eaccee0a8a96fbb4b09e-1] regionserver.HStore(310): Store=f33ab4573a17eaccee0a8a96fbb4b09e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,174 DEBUG [StoreOpener-b7eb9c59d6671ee68da297b518c3d69f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/f 2023-07-24 18:10:49,174 DEBUG [StoreOpener-b7eb9c59d6671ee68da297b518c3d69f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/f 2023-07-24 18:10:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,174 INFO [StoreOpener-b7eb9c59d6671ee68da297b518c3d69f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7eb9c59d6671ee68da297b518c3d69f columnFamilyName f 2023-07-24 18:10:49,174 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=36 2023-07-24 18:10:49,175 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=36, state=SUCCESS; OpenRegionProcedure e514cdd4cdfd34aeb0d9a95efa3cb7bd, server=jenkins-hbase4.apache.org,43449,1690222239527 in 196 msec 2023-07-24 18:10:49,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,175 INFO [StoreOpener-b7eb9c59d6671ee68da297b518c3d69f-1] regionserver.HStore(310): Store=b7eb9c59d6671ee68da297b518c3d69f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f0b3ad268556b841275517b26b1fdacf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10138771840, jitterRate=-0.05575329065322876}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f0b3ad268556b841275517b26b1fdacf: 2023-07-24 18:10:49,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, ASSIGN in 369 msec 2023-07-24 18:10:49,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf., pid=43, masterSystemTime=1690222249141 2023-07-24 18:10:49,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,182 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cded9eb3674256077270274766530d6b, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-24 18:10:49,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5926c5ea121381a09f35e016b6edea1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11999922400, jitterRate=0.1175798624753952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5926c5ea121381a09f35e016b6edea1d: 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d., pid=46, masterSystemTime=1690222249148 2023-07-24 18:10:49,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,187 INFO [StoreOpener-cded9eb3674256077270274766530d6b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,187 INFO [PEWorker-2] 
assignment.RegionStateStore(219): pid=34 updating hbase:meta row=f0b3ad268556b841275517b26b1fdacf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,188 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249187"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249187"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249187"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249187"}]},"ts":"1690222249187"} 2023-07-24 18:10:49,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:49,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:49,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:49,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d9b9dfe2f03499bc733af95b9e7d2fe1, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-24 18:10:49,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,189 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=37 updating hbase:meta row=5926c5ea121381a09f35e016b6edea1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:49,190 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249189"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249189"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249189"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249189"}]},"ts":"1690222249189"} 2023-07-24 18:10:49,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,198 DEBUG [StoreOpener-cded9eb3674256077270274766530d6b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/f 2023-07-24 18:10:49,198 DEBUG [StoreOpener-cded9eb3674256077270274766530d6b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/f 2023-07-24 18:10:49,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f33ab4573a17eaccee0a8a96fbb4b09e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10136445280, jitterRate=-0.0559699684381485}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f33ab4573a17eaccee0a8a96fbb4b09e: 2023-07-24 18:10:49,199 INFO [StoreOpener-cded9eb3674256077270274766530d6b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cded9eb3674256077270274766530d6b columnFamilyName f 2023-07-24 18:10:49,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,199 INFO [StoreOpener-d9b9dfe2f03499bc733af95b9e7d2fe1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,200 INFO [StoreOpener-cded9eb3674256077270274766530d6b-1] regionserver.HStore(310): Store=cded9eb3674256077270274766530d6b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e., pid=39, masterSystemTime=1690222249131 2023-07-24 18:10:49,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,202 DEBUG 
[StoreOpener-d9b9dfe2f03499bc733af95b9e7d2fe1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/f 2023-07-24 18:10:49,202 DEBUG [StoreOpener-d9b9dfe2f03499bc733af95b9e7d2fe1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/f 2023-07-24 18:10:49,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:49,203 INFO [StoreOpener-d9b9dfe2f03499bc733af95b9e7d2fe1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d9b9dfe2f03499bc733af95b9e7d2fe1 columnFamilyName f 2023-07-24 18:10:49,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:49,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 
2023-07-24 18:10:49,203 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=34 2023-07-24 18:10:49,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c6648a7cb76bf9547578b1066176197, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-24 18:10:49,204 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=34, state=SUCCESS; OpenRegionProcedure f0b3ad268556b841275517b26b1fdacf, server=jenkins-hbase4.apache.org,37467,1690222246245 in 212 msec 2023-07-24 18:10:49,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7eb9c59d6671ee68da297b518c3d69f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10540573600, jitterRate=-0.0183325856924057}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,204 INFO [StoreOpener-d9b9dfe2f03499bc733af95b9e7d2fe1-1] regionserver.HStore(310): Store=d9b9dfe2f03499bc733af95b9e7d2fe1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,204 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=f33ab4573a17eaccee0a8a96fbb4b09e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,205 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249204"}]},"ts":"1690222249204"} 2023-07-24 18:10:49,204 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=37 2023-07-24 18:10:49,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7eb9c59d6671ee68da297b518c3d69f: 2023-07-24 18:10:49,205 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=37, state=SUCCESS; OpenRegionProcedure 5926c5ea121381a09f35e016b6edea1d, server=jenkins-hbase4.apache.org,35913,1690222239741 in 206 msec 2023-07-24 18:10:49,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,206 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f., pid=42, masterSystemTime=1690222249130 2023-07-24 18:10:49,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, ASSIGN in 397 msec 2023-07-24 18:10:49,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, ASSIGN in 399 msec 2023-07-24 18:10:49,209 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=b7eb9c59d6671ee68da297b518c3d69f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:49,210 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249209"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249209"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249209"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249209"}]},"ts":"1690222249209"} 2023-07-24 18:10:49,211 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=30 2023-07-24 18:10:49,211 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=30, state=SUCCESS; OpenRegionProcedure f33ab4573a17eaccee0a8a96fbb4b09e, server=jenkins-hbase4.apache.org,41915,1690222243305 in 230 msec 2023-07-24 18:10:49,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,213 INFO [StoreOpener-5c6648a7cb76bf9547578b1066176197-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, ASSIGN in 405 msec 2023-07-24 18:10:49,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=28 2023-07-24 18:10:49,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=28, state=SUCCESS; OpenRegionProcedure b7eb9c59d6671ee68da297b518c3d69f, server=jenkins-hbase4.apache.org,43449,1690222239527 in 228 msec 2023-07-24 18:10:49,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, ASSIGN in 410 msec 2023-07-24 18:10:49,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d9b9dfe2f03499bc733af95b9e7d2fe1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10480183520, jitterRate=-0.023956850171089172}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,219 DEBUG [StoreOpener-5c6648a7cb76bf9547578b1066176197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/f 2023-07-24 18:10:49,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d9b9dfe2f03499bc733af95b9e7d2fe1: 2023-07-24 18:10:49,220 DEBUG [StoreOpener-5c6648a7cb76bf9547578b1066176197-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/f 2023-07-24 18:10:49,220 INFO [StoreOpener-5c6648a7cb76bf9547578b1066176197-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c6648a7cb76bf9547578b1066176197 columnFamilyName f 
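(The CompactionConfiguration lines above repeat the same per-store defaults for every region: minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2. As a rough illustration only, not taken from this test, those values correspond to the stock hbase-site.xml compaction keys and could be set programmatically as sketched below; the class name is hypothetical.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public final class CompactionTuningSketch {
      // Illustrative only: these standard keys feed the CompactionConfiguration
      // values printed above (minFilesToCompact, maxFilesToCompact, ratio, minCompactSize).
      static Configuration tunedConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // compaction ratio
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
        return conf;
      }
    }
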
2023-07-24 18:10:49,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cded9eb3674256077270274766530d6b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10913962400, jitterRate=0.016441956162452698}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cded9eb3674256077270274766530d6b: 2023-07-24 18:10:49,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1., pid=47, masterSystemTime=1690222249148 2023-07-24 18:10:49,221 INFO [StoreOpener-5c6648a7cb76bf9547578b1066176197-1] regionserver.HStore(310): Store=5c6648a7cb76bf9547578b1066176197/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b., pid=45, masterSystemTime=1690222249141 2023-07-24 18:10:49,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:49,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:49,224 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=d9b9dfe2f03499bc733af95b9e7d2fe1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:49,224 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249224"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249224"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249224"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249224"}]},"ts":"1690222249224"} 2023-07-24 18:10:49,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 
2023-07-24 18:10:49,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:49,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3db37070443177e7e2d98fa661f48d55, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-24 18:10:49,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,227 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=cded9eb3674256077270274766530d6b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,227 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249227"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249227"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249227"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249227"}]},"ts":"1690222249227"} 2023-07-24 18:10:49,229 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,231 INFO [StoreOpener-3db37070443177e7e2d98fa661f48d55-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,231 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=31 2023-07-24 18:10:49,231 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=31, state=SUCCESS; OpenRegionProcedure d9b9dfe2f03499bc733af95b9e7d2fe1, server=jenkins-hbase4.apache.org,35913,1690222239741 in 231 msec 2023-07-24 18:10:49,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=29 2023-07-24 18:10:49,233 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, ASSIGN in 425 msec 2023-07-24 18:10:49,233 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=29, state=SUCCESS; OpenRegionProcedure cded9eb3674256077270274766530d6b, server=jenkins-hbase4.apache.org,37467,1690222246245 in 241 msec 2023-07-24 18:10:49,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c6648a7cb76bf9547578b1066176197; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10569545440, jitterRate=-0.01563437283039093}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c6648a7cb76bf9547578b1066176197: 2023-07-24 18:10:49,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197., pid=40, masterSystemTime=1690222249131 2023-07-24 18:10:49,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, ASSIGN in 427 msec 2023-07-24 18:10:49,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:49,237 DEBUG [StoreOpener-3db37070443177e7e2d98fa661f48d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/f 2023-07-24 18:10:49,237 DEBUG [StoreOpener-3db37070443177e7e2d98fa661f48d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/f 2023-07-24 18:10:49,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 
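(At this point each opened region of Group_testCreateMultiRegion has its regionLocation recorded in hbase:meta by the PEWorker threads. A client can read back that same assignment; a minimal sketch, assuming an already-open Connection named conn, which is not part of the test code.)

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class RegionLocationSketch {
      // Print each region name and the server it is assigned to, i.e. the
      // regionLocation values being written into hbase:meta above.
      static void printLocations(Connection conn) throws IOException {
        TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getRegionNameAsString()
                + " -> " + loc.getServerName());
          }
        }
      }
    }
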
2023-07-24 18:10:49,237 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=5c6648a7cb76bf9547578b1066176197, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,238 INFO [StoreOpener-3db37070443177e7e2d98fa661f48d55-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3db37070443177e7e2d98fa661f48d55 columnFamilyName f 2023-07-24 18:10:49,238 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249237"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249237"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249237"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249237"}]},"ts":"1690222249237"} 2023-07-24 18:10:49,238 INFO [StoreOpener-3db37070443177e7e2d98fa661f48d55-1] regionserver.HStore(310): Store=3db37070443177e7e2d98fa661f48d55/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=35 2023-07-24 18:10:49,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=35, state=SUCCESS; OpenRegionProcedure 5c6648a7cb76bf9547578b1066176197, server=jenkins-hbase4.apache.org,41915,1690222243305 in 258 msec 2023-07-24 18:10:49,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, ASSIGN in 436 msec 2023-07-24 18:10:49,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/recovered.edits/1.seqid, newMaxSeqId=1, 
maxSeqId=-1 2023-07-24 18:10:49,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3db37070443177e7e2d98fa661f48d55; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10132631680, jitterRate=-0.05632513761520386}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3db37070443177e7e2d98fa661f48d55: 2023-07-24 18:10:49,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55., pid=44, masterSystemTime=1690222249141 2023-07-24 18:10:49,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,249 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=3db37070443177e7e2d98fa661f48d55, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,249 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249249"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249249"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249249"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249249"}]},"ts":"1690222249249"} 2023-07-24 18:10:49,253 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=33 2023-07-24 18:10:49,253 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=33, state=SUCCESS; OpenRegionProcedure 3db37070443177e7e2d98fa661f48d55, server=jenkins-hbase4.apache.org,37467,1690222246245 in 264 msec 2023-07-24 18:10:49,255 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=27 2023-07-24 18:10:49,255 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, ASSIGN in 447 msec 2023-07-24 18:10:49,256 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:49,256 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222249256"}]},"ts":"1690222249256"} 2023-07-24 18:10:49,258 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-24 18:10:49,260 INFO [PEWorker-2] 
procedure.CreateTableProcedure(80): pid=27, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:49,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.1060 sec 2023-07-24 18:10:49,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=27 2023-07-24 18:10:49,277 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 27 completed 2023-07-24 18:10:49,278 DEBUG [Listener at localhost/44627] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-24 18:10:49,279 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:49,281 WARN [RPCClient-NioEventLoopGroup-6-1] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:34741 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:34741 Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716) at org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:49,283 DEBUG [RPCClient-NioEventLoopGroup-6-1] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:34741 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:34741 2023-07-24 18:10:49,387 DEBUG [hconnection-0x1e045fb3-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:49,389 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:49,396 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 
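(The entries above are the tail of CreateTableProcedure pid=27: the table is marked ENABLED in hbase:meta, the client's HBaseAdmin future reports CREATE completed, and the test utility waits until all regions are assigned; the single "Connection refused" is a transient failure that the client records in its failed-servers list before retrying. For orientation, the kind of client call that produces such a pre-split create is sketched below. This is an illustration under the assumption of an open Connection conn, not the test's actual code, and only the first two split keys from the log are shown.)

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateMultiRegionSketch {
      // createTable(desc, splitKeys) blocks until the CreateTableProcedure
      // (pid=27 above) finishes, i.e. until every region is opened and the
      // table is ENABLED in hbase:meta.
      static void createPreSplit(Connection conn) throws IOException {
        TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
        byte[][] splitKeys = {
            {0x00, 0x02, 0x04, 0x06, 0x08},  // boundary between the first two regions in the log
            {0x00, 0x22, 0x24, 0x26, 0x28},  // '\x00"$&(' -- remaining keys elided
        };
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(tn)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .build(),
              splitKeys);
        }
      }
    }
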
2023-07-24 18:10:49,397 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:49,398 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 2023-07-24 18:10:49,402 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-24 18:10:49,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateMultiRegion 2023-07-24 18:10:49,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=48, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-24 18:10:49,408 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222249408"}]},"ts":"1690222249408"} 2023-07-24 18:10:49,410 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-24 18:10:49,413 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-24 18:10:49,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, UNASSIGN}, {pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, UNASSIGN}, {pid=51, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, UNASSIGN}, {pid=52, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, UNASSIGN}, {pid=53, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, UNASSIGN}, {pid=54, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, UNASSIGN}, {pid=55, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, UNASSIGN}, {pid=56, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, UNASSIGN}, {pid=57, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, UNASSIGN}, {pid=58, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, UNASSIGN}] 2023-07-24 18:10:49,421 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=48, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, UNASSIGN 2023-07-24 18:10:49,421 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, UNASSIGN 2023-07-24 18:10:49,422 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, UNASSIGN 2023-07-24 18:10:49,422 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, UNASSIGN 2023-07-24 18:10:49,422 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, UNASSIGN 2023-07-24 18:10:49,426 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=f33ab4573a17eaccee0a8a96fbb4b09e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,426 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=cded9eb3674256077270274766530d6b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,426 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249426"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249426"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249426"}]},"ts":"1690222249426"} 2023-07-24 18:10:49,426 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249426"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249426"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249426"}]},"ts":"1690222249426"} 2023-07-24 18:10:49,427 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=d9b9dfe2f03499bc733af95b9e7d2fe1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:49,427 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249427"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249427"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249427"}]},"ts":"1690222249427"} 2023-07-24 18:10:49,427 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=5926c5ea121381a09f35e016b6edea1d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:49,427 INFO [PEWorker-2] 
assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b7eb9c59d6671ee68da297b518c3d69f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:49,428 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249427"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249427"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249427"}]},"ts":"1690222249427"} 2023-07-24 18:10:49,428 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249427"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249427"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249427"}]},"ts":"1690222249427"} 2023-07-24 18:10:49,429 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=49, state=RUNNABLE; CloseRegionProcedure cded9eb3674256077270274766530d6b, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:49,430 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=50, state=RUNNABLE; CloseRegionProcedure f33ab4573a17eaccee0a8a96fbb4b09e, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:49,435 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=51, state=RUNNABLE; CloseRegionProcedure d9b9dfe2f03499bc733af95b9e7d2fe1, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:49,437 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; CloseRegionProcedure 5926c5ea121381a09f35e016b6edea1d, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:49,437 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, UNASSIGN 2023-07-24 18:10:49,439 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; CloseRegionProcedure b7eb9c59d6671ee68da297b518c3d69f, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:49,439 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, UNASSIGN 2023-07-24 18:10:49,439 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=e514cdd4cdfd34aeb0d9a95efa3cb7bd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:49,440 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249439"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249439"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249439"}]},"ts":"1690222249439"} 
2023-07-24 18:10:49,441 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=5c6648a7cb76bf9547578b1066176197, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,442 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249441"}]},"ts":"1690222249441"} 2023-07-24 18:10:49,444 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, UNASSIGN 2023-07-24 18:10:49,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=56, state=RUNNABLE; CloseRegionProcedure e514cdd4cdfd34aeb0d9a95efa3cb7bd, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:49,445 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=f0b3ad268556b841275517b26b1fdacf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=55, state=RUNNABLE; CloseRegionProcedure 5c6648a7cb76bf9547578b1066176197, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:49,445 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249445"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249445"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249445"}]},"ts":"1690222249445"} 2023-07-24 18:10:49,445 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, UNASSIGN 2023-07-24 18:10:49,446 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=48, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, UNASSIGN 2023-07-24 18:10:49,448 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=3db37070443177e7e2d98fa661f48d55, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:49,448 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249448"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249448"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249448"}]},"ts":"1690222249448"} 2023-07-24 18:10:49,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=54, state=RUNNABLE; CloseRegionProcedure 
f0b3ad268556b841275517b26b1fdacf, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:49,449 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=9f428bb109715eb70d4ca718e4a695f5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:49,450 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249449"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249449"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249449"}]},"ts":"1690222249449"} 2023-07-24 18:10:49,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=53, state=RUNNABLE; CloseRegionProcedure 3db37070443177e7e2d98fa661f48d55, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:49,452 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=52, state=RUNNABLE; CloseRegionProcedure 9f428bb109715eb70d4ca718e4a695f5, server=jenkins-hbase4.apache.org,41915,1690222243305}] 2023-07-24 18:10:49,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-24 18:10:49,584 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testCreateMultiRegion' 2023-07-24 18:10:49,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f0b3ad268556b841275517b26b1fdacf, disabling compactions & flushes 2023-07-24 18:10:49,587 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. after waiting 0 ms 2023-07-24 18:10:49,587 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5926c5ea121381a09f35e016b6edea1d, disabling compactions & flushes 2023-07-24 18:10:49,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 
2023-07-24 18:10:49,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:49,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. after waiting 0 ms 2023-07-24 18:10:49,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 2023-07-24 18:10:49,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c6648a7cb76bf9547578b1066176197, disabling compactions & flushes 2023-07-24 18:10:49,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:49,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:49,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. after waiting 0 ms 2023-07-24 18:10:49,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:49,603 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf. 2023-07-24 18:10:49,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f0b3ad268556b841275517b26b1fdacf: 2023-07-24 18:10:49,608 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d. 
2023-07-24 18:10:49,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5926c5ea121381a09f35e016b6edea1d: 2023-07-24 18:10:49,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7eb9c59d6671ee68da297b518c3d69f, disabling compactions & flushes 2023-07-24 18:10:49,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. after waiting 0 ms 2023-07-24 18:10:49,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d9b9dfe2f03499bc733af95b9e7d2fe1, disabling compactions & flushes 2023-07-24 18:10:49,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:49,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 2023-07-24 18:10:49,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. after waiting 0 ms 2023-07-24 18:10:49,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 
2023-07-24 18:10:49,613 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=5926c5ea121381a09f35e016b6edea1d, regionState=CLOSED 2023-07-24 18:10:49,613 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249613"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249613"}]},"ts":"1690222249613"} 2023-07-24 18:10:49,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,615 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=f0b3ad268556b841275517b26b1fdacf, regionState=CLOSED 2023-07-24 18:10:49,616 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249615"}]},"ts":"1690222249615"} 2023-07-24 18:10:49,619 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-24 18:10:49,619 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; CloseRegionProcedure 5926c5ea121381a09f35e016b6edea1d, server=jenkins-hbase4.apache.org,35913,1690222239741 in 179 msec 2023-07-24 18:10:49,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=54 2023-07-24 18:10:49,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5926c5ea121381a09f35e016b6edea1d, UNASSIGN in 200 msec 2023-07-24 18:10:49,622 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=54, state=SUCCESS; CloseRegionProcedure f0b3ad268556b841275517b26b1fdacf, server=jenkins-hbase4.apache.org,37467,1690222246245 in 169 msec 2023-07-24 18:10:49,624 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f0b3ad268556b841275517b26b1fdacf, UNASSIGN in 203 msec 2023-07-24 18:10:49,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3db37070443177e7e2d98fa661f48d55, disabling compactions & flushes 2023-07-24 18:10:49,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 
after waiting 0 ms 2023-07-24 18:10:49,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 2023-07-24 18:10:49,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f. 2023-07-24 18:10:49,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7eb9c59d6671ee68da297b518c3d69f: 2023-07-24 18:10:49,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197. 2023-07-24 18:10:49,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c6648a7cb76bf9547578b1066176197: 2023-07-24 18:10:49,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1. 
2023-07-24 18:10:49,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d9b9dfe2f03499bc733af95b9e7d2fe1: 2023-07-24 18:10:49,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,640 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,640 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b7eb9c59d6671ee68da297b518c3d69f, regionState=CLOSED 2023-07-24 18:10:49,641 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222249640"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249640"}]},"ts":"1690222249640"} 2023-07-24 18:10:49,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,641 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,642 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=5c6648a7cb76bf9547578b1066176197, regionState=CLOSED 2023-07-24 18:10:49,642 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249642"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249642"}]},"ts":"1690222249642"} 2023-07-24 18:10:49,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,644 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=d9b9dfe2f03499bc733af95b9e7d2fe1, regionState=CLOSED 2023-07-24 18:10:49,644 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249644"}]},"ts":"1690222249644"} 2023-07-24 18:10:49,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e514cdd4cdfd34aeb0d9a95efa3cb7bd, disabling compactions & flushes 2023-07-24 18:10:49,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f33ab4573a17eaccee0a8a96fbb4b09e, disabling compactions & flushes 2023-07-24 18:10:49,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:49,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 
2023-07-24 18:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. after waiting 0 ms 2023-07-24 18:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:49,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. after waiting 0 ms 2023-07-24 18:10:49,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 2023-07-24 18:10:49,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-24 18:10:49,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; CloseRegionProcedure b7eb9c59d6671ee68da297b518c3d69f, server=jenkins-hbase4.apache.org,43449,1690222239527 in 204 msec 2023-07-24 18:10:49,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55. 
2023-07-24 18:10:49,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3db37070443177e7e2d98fa661f48d55: 2023-07-24 18:10:49,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=55 2023-07-24 18:10:49,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=55, state=SUCCESS; CloseRegionProcedure 5c6648a7cb76bf9547578b1066176197, server=jenkins-hbase4.apache.org,41915,1690222243305 in 200 msec 2023-07-24 18:10:49,655 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=b7eb9c59d6671ee68da297b518c3d69f, UNASSIGN in 233 msec 2023-07-24 18:10:49,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,656 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cded9eb3674256077270274766530d6b, disabling compactions & flushes 2023-07-24 18:10:49,658 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=51 2023-07-24 18:10:49,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:49,658 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=51, state=SUCCESS; CloseRegionProcedure d9b9dfe2f03499bc733af95b9e7d2fe1, server=jenkins-hbase4.apache.org,35913,1690222239741 in 217 msec 2023-07-24 18:10:49,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 2023-07-24 18:10:49,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. after waiting 0 ms 2023-07-24 18:10:49,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 
2023-07-24 18:10:49,659 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=3db37070443177e7e2d98fa661f48d55, regionState=CLOSED 2023-07-24 18:10:49,659 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249659"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249659"}]},"ts":"1690222249659"} 2023-07-24 18:10:49,660 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5c6648a7cb76bf9547578b1066176197, UNASSIGN in 236 msec 2023-07-24 18:10:49,661 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e. 2023-07-24 18:10:49,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f33ab4573a17eaccee0a8a96fbb4b09e: 2023-07-24 18:10:49,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd. 
2023-07-24 18:10:49,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e514cdd4cdfd34aeb0d9a95efa3cb7bd: 2023-07-24 18:10:49,663 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=d9b9dfe2f03499bc733af95b9e7d2fe1, UNASSIGN in 243 msec 2023-07-24 18:10:49,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=53 2023-07-24 18:10:49,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,665 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=53, state=SUCCESS; CloseRegionProcedure 3db37070443177e7e2d98fa661f48d55, server=jenkins-hbase4.apache.org,37467,1690222246245 in 210 msec 2023-07-24 18:10:49,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9f428bb109715eb70d4ca718e4a695f5, disabling compactions & flushes 2023-07-24 18:10:49,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. after waiting 0 ms 2023-07-24 18:10:49,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 2023-07-24 18:10:49,667 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b. 
2023-07-24 18:10:49,667 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cded9eb3674256077270274766530d6b: 2023-07-24 18:10:49,668 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=f33ab4573a17eaccee0a8a96fbb4b09e, regionState=CLOSED 2023-07-24 18:10:49,668 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249668"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249668"}]},"ts":"1690222249668"} 2023-07-24 18:10:49,668 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,669 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=e514cdd4cdfd34aeb0d9a95efa3cb7bd, regionState=CLOSED 2023-07-24 18:10:49,669 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249669"}]},"ts":"1690222249669"} 2023-07-24 18:10:49,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3db37070443177e7e2d98fa661f48d55, UNASSIGN in 247 msec 2023-07-24 18:10:49,670 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=cded9eb3674256077270274766530d6b, regionState=CLOSED 2023-07-24 18:10:49,670 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249670"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249670"}]},"ts":"1690222249670"} 2023-07-24 18:10:49,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:49,672 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5. 
2023-07-24 18:10:49,672 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9f428bb109715eb70d4ca718e4a695f5: 2023-07-24 18:10:49,674 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=50 2023-07-24 18:10:49,674 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=50, state=SUCCESS; CloseRegionProcedure f33ab4573a17eaccee0a8a96fbb4b09e, server=jenkins-hbase4.apache.org,41915,1690222243305 in 240 msec 2023-07-24 18:10:49,674 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=56 2023-07-24 18:10:49,675 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=56, state=SUCCESS; CloseRegionProcedure e514cdd4cdfd34aeb0d9a95efa3cb7bd, server=jenkins-hbase4.apache.org,43449,1690222239527 in 228 msec 2023-07-24 18:10:49,675 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=9f428bb109715eb70d4ca718e4a695f5, regionState=CLOSED 2023-07-24 18:10:49,675 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222249675"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249675"}]},"ts":"1690222249675"} 2023-07-24 18:10:49,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=49 2023-07-24 18:10:49,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=f33ab4573a17eaccee0a8a96fbb4b09e, UNASSIGN in 259 msec 2023-07-24 18:10:49,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=49, state=SUCCESS; CloseRegionProcedure cded9eb3674256077270274766530d6b, server=jenkins-hbase4.apache.org,37467,1690222246245 in 244 msec 2023-07-24 18:10:49,677 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=e514cdd4cdfd34aeb0d9a95efa3cb7bd, UNASSIGN in 256 msec 2023-07-24 18:10:49,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cded9eb3674256077270274766530d6b, UNASSIGN in 261 msec 2023-07-24 18:10:49,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=52 2023-07-24 18:10:49,680 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=52, state=SUCCESS; CloseRegionProcedure 9f428bb109715eb70d4ca718e4a695f5, server=jenkins-hbase4.apache.org,41915,1690222243305 in 225 msec 2023-07-24 18:10:49,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=48 2023-07-24 18:10:49,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9f428bb109715eb70d4ca718e4a695f5, UNASSIGN in 261 msec 2023-07-24 18:10:49,682 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222249682"}]},"ts":"1690222249682"} 2023-07-24 18:10:49,684 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-24 18:10:49,686 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-24 18:10:49,688 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 284 msec 2023-07-24 18:10:49,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=48 2023-07-24 18:10:49,711 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateMultiRegion, procId: 48 completed 2023-07-24 18:10:49,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateMultiRegion 2023-07-24 18:10:49,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,714 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=69, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-24 18:10:49,715 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=69, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:49,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:49,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:49,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-24 18:10:49,733 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,734 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,735 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,740 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/recovered.edits] 2023-07-24 18:10:49,741 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/recovered.edits] 2023-07-24 18:10:49,741 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/recovered.edits] 2023-07-24 18:10:49,743 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/recovered.edits] 2023-07-24 18:10:49,744 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/recovered.edits] 2023-07-24 18:10:49,744 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/recovered.edits] 2023-07-24 18:10:49,744 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/recovered.edits] 2023-07-24 18:10:49,744 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/recovered.edits] 2023-07-24 18:10:49,766 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5/recovered.edits/4.seqid 2023-07-24 18:10:49,766 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd/recovered.edits/4.seqid 2023-07-24 18:10:49,767 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/9f428bb109715eb70d4ca718e4a695f5 2023-07-24 18:10:49,767 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,772 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197/recovered.edits/4.seqid 2023-07-24 18:10:49,772 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/e514cdd4cdfd34aeb0d9a95efa3cb7bd 2023-07-24 18:10:49,772 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,773 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf/recovered.edits/4.seqid 2023-07-24 18:10:49,773 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e/recovered.edits/4.seqid 2023-07-24 18:10:49,774 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5c6648a7cb76bf9547578b1066176197 2023-07-24 18:10:49,775 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55/recovered.edits/4.seqid 2023-07-24 18:10:49,775 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f0b3ad268556b841275517b26b1fdacf 2023-07-24 18:10:49,775 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b/recovered.edits/4.seqid 2023-07-24 18:10:49,776 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/f33ab4573a17eaccee0a8a96fbb4b09e 2023-07-24 18:10:49,778 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1/recovered.edits/4.seqid 2023-07-24 18:10:49,783 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/recovered.edits] 2023-07-24 18:10:49,784 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/3db37070443177e7e2d98fa661f48d55 2023-07-24 18:10:49,784 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/cded9eb3674256077270274766530d6b 2023-07-24 18:10:49,784 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/d9b9dfe2f03499bc733af95b9e7d2fe1 2023-07-24 18:10:49,784 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/recovered.edits] 2023-07-24 18:10:49,794 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d/recovered.edits/4.seqid 2023-07-24 18:10:49,795 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f/recovered.edits/4.seqid 2023-07-24 18:10:49,795 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/5926c5ea121381a09f35e016b6edea1d 2023-07-24 18:10:49,795 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateMultiRegion/b7eb9c59d6671ee68da297b518c3d69f 2023-07-24 18:10:49,795 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-24 18:10:49,799 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=69, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,805 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-24 18:10:49,808 DEBUG 
[PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=69, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1690222248154.cded9eb3674256077270274766530d6b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,810 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,811 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete 
{"totalColumns":1,"row":"Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222249810"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,813 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-24 18:10:49,813 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cded9eb3674256077270274766530d6b, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1690222248154.cded9eb3674256077270274766530d6b.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, {ENCODED => f33ab4573a17eaccee0a8a96fbb4b09e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1690222248154.f33ab4573a17eaccee0a8a96fbb4b09e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => d9b9dfe2f03499bc733af95b9e7d2fe1, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1690222248154.d9b9dfe2f03499bc733af95b9e7d2fe1.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => 9f428bb109715eb70d4ca718e4a695f5, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1690222248154.9f428bb109715eb70d4ca718e4a695f5.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => 3db37070443177e7e2d98fa661f48d55, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1690222248154.3db37070443177e7e2d98fa661f48d55.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => f0b3ad268556b841275517b26b1fdacf, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1690222248154.f0b3ad268556b841275517b26b1fdacf.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => 5c6648a7cb76bf9547578b1066176197, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1690222248154.5c6648a7cb76bf9547578b1066176197.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => e514cdd4cdfd34aeb0d9a95efa3cb7bd, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1690222248154.e514cdd4cdfd34aeb0d9a95efa3cb7bd.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 5926c5ea121381a09f35e016b6edea1d, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1690222248154.5926c5ea121381a09f35e016b6edea1d.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => b7eb9c59d6671ee68da297b518c3d69f, NAME => 'Group_testCreateMultiRegion,,1690222248154.b7eb9c59d6671ee68da297b518c3d69f.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-24 18:10:49,813 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
2023-07-24 18:10:49,814 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222249813"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:49,815 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-24 18:10:49,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-24 18:10:49,823 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=69, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:10:49,825 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 111 msec 2023-07-24 18:10:50,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-24 18:10:50,024 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 69 completed 2023-07-24 18:10:50,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:50,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
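Note: with pid=69 finished, the client-side future for "Operation: DELETE, Table Name: default:Group_testCreateMultiRegion" resolves; the procedure archived the region directories, removed the region and table-state rows from hbase:meta, and dropped the descriptor. A minimal sketch of the Admin calls that drive such a DeleteTableProcedure (only the table name comes from the log; the configuration/connection boilerplate is assumed):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DeleteTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testCreateMultiRegion");
      if (admin.tableExists(tn)) {
        admin.disableTable(tn); // a table must be disabled before it can be deleted
        admin.deleteTable(tn);  // schedules DeleteTableProcedure on the master and waits for it
      }
    }
  }
}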
2023-07-24 18:10:50,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:50,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:50,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:50,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:50,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:50,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:50,043 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:50,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:50,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:50,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:50,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:50,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:50,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223450058, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:50,059 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:50,061 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:50,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,062 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:50,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:50,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:50,081 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=506 (was 499) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1280172802_17 at /127.0.0.1:36134 [Waiting for operation #17] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1459919364_17 at /127.0.0.1:60560 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1173938704_17 at /127.0.0.1:60388 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1e045fb3-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-372597080_17 at /127.0.0.1:60600 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=808 (was 772) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=591 (was 599), ProcessCount=177 (was 177), AvailableMemoryMB=5511 (was 5536) 2023-07-24 18:10:50,081 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-24 18:10:50,098 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=506, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=591, ProcessCount=177, AvailableMemoryMB=5510 2023-07-24 18:10:50,098 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-24 18:10:50,098 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-24 18:10:50,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:50,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:50,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:50,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:50,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:50,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:50,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:50,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:50,122 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:50,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:50,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,126 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:50,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:50,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:50,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:50,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 275 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223450136, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:50,137 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:50,139 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:50,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,140 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:50,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:50,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:50,141 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-24 18:10:50,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:50,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:50,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-24 18:10:50,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:50,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:50,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
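Note: both ConstraintException traces above come from the shared setup/teardown helper in TestRSGroupsBase, which tries to move the master's address (jenkins-hbase4.apache.org:34677) into the "master" rsgroup; RSGroupAdminServer rejects it because that address is not a registered region server, and the test logs it as "Got this on setup, FYI" and carries on. A hedged sketch of the client call visible at RSGroupAdminClient.java:108 in the traces (the constructor usage and target group name are assumptions for illustration):

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  // Attempt the moveServers call seen in the traces; a non-regionserver address
  // is rejected by the master with the ConstraintException logged above.
  static void moveMasterIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Address master = Address.fromParts("jenkins-hbase4.apache.org", 34677); // active master, not an RS
    try {
      rsGroupAdmin.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException expected) {
      // "Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist."
    }
  }
}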
2023-07-24 18:10:50,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35913] to rsgroup appInfo 2023-07-24 18:10:50,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:50,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:50,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(238): Moving server region b3e0fb36cbe9750f5f2b47d078547932, which do not belong to RSGroup appInfo 2023-07-24 18:10:50,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:10:50,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 18:10:50,179 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:10:50,180 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:50,180 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222250180"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222250180"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222250180"}]},"ts":"1690222250180"} 2023-07-24 18:10:50,183 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=70, state=RUNNABLE; CloseRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:50,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:10:50,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:50,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:10:50,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:10:50,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:50,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b3e0fb36cbe9750f5f2b47d078547932 1/1 column families, dataSize=150 B heapSize=632 B 2023-07-24 18:10:50,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=150 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/.tmp/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:10:50,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/.tmp/info/99881a762fd443059bf23593fedbb752 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:10:50,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752, entries=3, sequenceid=7, filesize=4.9 K 2023-07-24 18:10:50,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~150 B/150, heapSize ~616 B/616, currentSize=0 B/0 for b3e0fb36cbe9750f5f2b47d078547932 in 32ms, sequenceid=7, compaction requested=false 2023-07-24 18:10:50,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 18:10:50,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
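Note: closing the hbase:namespace region first flushed its 150 B memstore into a 4.9 K HFile (entries=3, sequenceid=7) and then wrote recovered.edits/10.seqid, so the region can reopen elsewhere without replaying WAL edits. That flush on close is automatic; the sketch below only illustrates the equivalent explicit Admin call and is an assumption, not something the test itself performs:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class FlushSketch {
  // Flush every region of hbase:namespace, the same kind of memstore flush the close path ran above.
  static void flushNamespaceTable(Admin admin) throws Exception {
    admin.flush(TableName.valueOf("hbase:namespace"));
  }
}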
2023-07-24 18:10:50,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:10:50,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b3e0fb36cbe9750f5f2b47d078547932 move to jenkins-hbase4.apache.org,37467,1690222246245 record at close sequenceid=7 2023-07-24 18:10:50,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,409 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=CLOSED 2023-07-24 18:10:50,410 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222250409"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222250409"}]},"ts":"1690222250409"} 2023-07-24 18:10:50,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=70 2023-07-24 18:10:50,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=70, state=SUCCESS; CloseRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,35913,1690222239741 in 230 msec 2023-07-24 18:10:50,417 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=70, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37467,1690222246245; forceNewPlan=false, retain=false 2023-07-24 18:10:50,568 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:50,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:50,568 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222250568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222250568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222250568"}]},"ts":"1690222250568"} 2023-07-24 18:10:50,570 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=70, state=RUNNABLE; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:50,726 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:10:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,727 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,729 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,730 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:10:50,730 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:10:50,730 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e0fb36cbe9750f5f2b47d078547932 columnFamilyName info 2023-07-24 18:10:50,740 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:10:50,741 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(310): Store=b3e0fb36cbe9750f5f2b47d078547932/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:50,742 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:50,751 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e0fb36cbe9750f5f2b47d078547932; next sequenceid=11; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9426991360, jitterRate=-0.12204301357269287}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:50,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:10:50,753 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., pid=72, masterSystemTime=1690222250722 2023-07-24 18:10:50,755 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:50,755 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
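The entries that follow show the master finishing the RSGroupAdminService.MoveServers call (Move servers done: default => appInfo) and then answering GetRSGroupInfo for the new group. As a hedged illustration only (not part of this log), the client-side calls that would produce those RPCs against the HBase 2.4 rsgroup admin API might look roughly like the sketch below; the connection setup and the concrete host/port are assumptions taken from this run, and the appInfo group is assumed to have been created earlier in the test.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move one region server from the default group into appInfo; this is what
      // triggers the REOPEN/MOVE of hbase:namespace off the moved server seen above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 35913)),
          "appInfo");
      // Corresponds to the GetRSGroupInfo request logged right after the move completes.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appInfo");
      System.out.println("appInfo servers: " + info.getServers());
    }
  }
}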
2023-07-24 18:10:50,755 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, openSeqNum=11, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:50,755 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222250755"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222250755"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222250755"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222250755"}]},"ts":"1690222250755"} 2023-07-24 18:10:50,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=70 2023-07-24 18:10:50,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=70, state=SUCCESS; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,37467,1690222246245 in 187 msec 2023-07-24 18:10:50,761 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE in 582 msec 2023-07-24 18:10:51,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure.ProcedureSyncWait(216): waitFor pid=70 2023-07-24 18:10:51,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35913,1690222239741] are moved back to default 2023-07-24 18:10:51,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-24 18:10:51,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:51,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:51,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:51,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-24 18:10:51,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:51,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-24 18:10:51,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:51,195 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35913] ipc.CallRunner(144): callId: 
186 service: ClientService methodName: Get size: 120 connection: 172.31.14.131:58278 deadline: 1690222311195, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=37467 startCode=1690222246245. As of locationSeqNum=7. 2023-07-24 18:10:51,298 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:51,300 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54542, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:51,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=73 2023-07-24 18:10:51,308 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:51,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 117 msec 2023-07-24 18:10:51,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=73 2023-07-24 18:10:51,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:51,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:51,410 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:51,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 74 2023-07-24 18:10:51,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 18:10:51,412 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:51,412 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:51,413 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:51,413 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:51,415 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:51,417 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,418 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 empty. 2023-07-24 18:10:51,418 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,418 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-24 18:10:51,435 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:51,436 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1918a26cdd93e70fd14d0ef6e18293a1, NAME => 'Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 1918a26cdd93e70fd14d0ef6e18293a1, disabling compactions & flushes 2023-07-24 18:10:51,449 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. after waiting 0 ms 2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:51,449 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 
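The RegionMovedException logged a few entries above (callId 186 against the old server, "Region moved to: ... port=37467 ... As of locationSeqNum=7") is the normal signal to a caller that hbase:namespace has relocated; the HBase client catches it internally, refreshes its region location cache from the hint in the exception, and retries. As a hedged sketch only, the kind of read that exercises this path could look like the following; the table is hbase:namespace as in this log, and the row key ("default") is an assumption for the example.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceReadSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table nsTable = conn.getTable(TableName.valueOf("hbase:namespace"))) {
      // If the cached location still points at the old server (port 35913), the RPC fails
      // with RegionMovedException; the client updates its cache from the exception and
      // retries against port 37467 without surfacing the error to the caller.
      Result r = nsTable.get(new Get(Bytes.toBytes("default")));
      System.out.println("hbase:namespace row found: " + !r.isEmpty());
    }
  }
}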
2023-07-24 18:10:51,449 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 1918a26cdd93e70fd14d0ef6e18293a1: 2023-07-24 18:10:51,452 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:51,453 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690222251453"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222251453"}]},"ts":"1690222251453"} 2023-07-24 18:10:51,455 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:51,457 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:51,457 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222251457"}]},"ts":"1690222251457"} 2023-07-24 18:10:51,460 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-24 18:10:51,467 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, ASSIGN}] 2023-07-24 18:10:51,470 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, ASSIGN 2023-07-24 18:10:51,471 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:51,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 18:10:51,622 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=1918a26cdd93e70fd14d0ef6e18293a1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:51,622 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690222251622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222251622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222251622"}]},"ts":"1690222251622"} 2023-07-24 18:10:51,624 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; OpenRegionProcedure 
1918a26cdd93e70fd14d0ef6e18293a1, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:51,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 18:10:51,782 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:51,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1918a26cdd93e70fd14d0ef6e18293a1, NAME => 'Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:51,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:51,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,784 INFO [StoreOpener-1918a26cdd93e70fd14d0ef6e18293a1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,786 DEBUG [StoreOpener-1918a26cdd93e70fd14d0ef6e18293a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/f 2023-07-24 18:10:51,786 DEBUG [StoreOpener-1918a26cdd93e70fd14d0ef6e18293a1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/f 2023-07-24 18:10:51,787 INFO [StoreOpener-1918a26cdd93e70fd14d0ef6e18293a1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1918a26cdd93e70fd14d0ef6e18293a1 columnFamilyName f 2023-07-24 18:10:51,787 INFO [StoreOpener-1918a26cdd93e70fd14d0ef6e18293a1-1] regionserver.HStore(310): Store=1918a26cdd93e70fd14d0ef6e18293a1/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:51,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:51,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:51,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1918a26cdd93e70fd14d0ef6e18293a1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10914296960, jitterRate=0.016473114490509033}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:51,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1918a26cdd93e70fd14d0ef6e18293a1: 2023-07-24 18:10:51,796 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1., pid=76, masterSystemTime=1690222251776 2023-07-24 18:10:51,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:51,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 
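The entries above trace CreateNamespaceProcedure (pid=73) for Group_foo, whose hbase.rsgroup.name => 'appInfo' property binds the namespace to the appInfo group, followed by CreateTableProcedure (pid=74) for Group_foo:Group_testCreateAndAssign with the single column family 'f'. A hedged sketch of the equivalent client-side Admin calls (HBase 2.4 API; connection setup is an assumption, table attributes are left at their defaults as in the descriptor printed above):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace pinned to the appInfo rsgroup, matching the
      // {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} create in the log.
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "appInfo")
          .build());
      // Table with one column family 'f'; regions of this table are assigned to
      // servers in appInfo, as the OpenRegionProcedure above shows.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_foo:Group_testCreateAndAssign"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
    }
  }
}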
2023-07-24 18:10:51,798 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=1918a26cdd93e70fd14d0ef6e18293a1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:51,799 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690222251798"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222251798"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222251798"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222251798"}]},"ts":"1690222251798"} 2023-07-24 18:10:51,803 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-24 18:10:51,803 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; OpenRegionProcedure 1918a26cdd93e70fd14d0ef6e18293a1, server=jenkins-hbase4.apache.org,35913,1690222239741 in 177 msec 2023-07-24 18:10:51,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=74 2023-07-24 18:10:51,805 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=74, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, ASSIGN in 336 msec 2023-07-24 18:10:51,805 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:51,805 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222251805"}]},"ts":"1690222251805"} 2023-07-24 18:10:51,807 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-24 18:10:51,809 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:51,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 402 msec 2023-07-24 18:10:52,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-24 18:10:52,015 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 74 completed 2023-07-24 18:10:52,016 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:52,023 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] 
procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-24 18:10:52,029 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222252029"}]},"ts":"1690222252029"} 2023-07-24 18:10:52,030 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-24 18:10:52,034 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-24 18:10:52,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, UNASSIGN}] 2023-07-24 18:10:52,036 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, UNASSIGN 2023-07-24 18:10:52,037 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1918a26cdd93e70fd14d0ef6e18293a1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:52,037 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690222252037"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222252037"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222252037"}]},"ts":"1690222252037"} 2023-07-24 18:10:52,039 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 1918a26cdd93e70fd14d0ef6e18293a1, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:52,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-24 18:10:52,235 INFO [AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData-prefix:jenkins-hbase4.apache.org,34677,1690222237492] wal.AbstractFSWAL(1141): Slow sync cost: 194 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:10:52,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:52,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1918a26cdd93e70fd14d0ef6e18293a1, disabling compactions & flushes 2023-07-24 18:10:52,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:52,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:52,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. after waiting 0 ms 2023-07-24 18:10:52,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:52,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:52,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1. 2023-07-24 18:10:52,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1918a26cdd93e70fd14d0ef6e18293a1: 2023-07-24 18:10:52,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:52,399 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=1918a26cdd93e70fd14d0ef6e18293a1, regionState=CLOSED 2023-07-24 18:10:52,400 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1690222252399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222252399"}]},"ts":"1690222252399"} 2023-07-24 18:10:52,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-24 18:10:52,403 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 1918a26cdd93e70fd14d0ef6e18293a1, server=jenkins-hbase4.apache.org,35913,1690222239741 in 362 msec 2023-07-24 18:10:52,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=77 2023-07-24 18:10:52,405 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=77, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=1918a26cdd93e70fd14d0ef6e18293a1, UNASSIGN in 368 msec 2023-07-24 18:10:52,406 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222252406"}]},"ts":"1690222252406"} 2023-07-24 18:10:52,407 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-24 18:10:52,409 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-24 18:10:52,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished 
pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 386 msec 2023-07-24 18:10:52,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-24 18:10:52,435 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 77 completed 2023-07-24 18:10:52,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,443 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-24 18:10:52,444 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:52,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:52,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:52,449 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:52,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-24 18:10:52,450 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/recovered.edits] 2023-07-24 18:10:52,459 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/recovered.edits/4.seqid to 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1/recovered.edits/4.seqid 2023-07-24 18:10:52,459 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_foo/Group_testCreateAndAssign/1918a26cdd93e70fd14d0ef6e18293a1 2023-07-24 18:10:52,460 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-24 18:10:52,462 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,464 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-24 18:10:52,466 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-24 18:10:52,467 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,467 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-24 18:10:52,467 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222252467"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:52,469 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:52,469 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1918a26cdd93e70fd14d0ef6e18293a1, NAME => 'Group_foo:Group_testCreateAndAssign,,1690222251406.1918a26cdd93e70fd14d0ef6e18293a1.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:52,469 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 
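The DisableTableProcedure (pid=77) and DeleteTableProcedure (pid=80) above, together with the DeleteNamespaceProcedure (pid=81) in the entries that follow, are the server side of an ordinary drop sequence; the rsgroup endpoint also removes the deleted table from 'appInfo', as logged at RSGroupAdminEndpoint(577). A hedged sketch of the client calls (HBase 2.4 Admin API; connection setup is an assumption):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropGroupTableSketch {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("Group_foo:Group_testCreateAndAssign");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.disableTable(tn);              // DisableTableProcedure, pid=77 in the log
      admin.deleteTable(tn);               // DeleteTableProcedure, pid=80; the rsgroup
                                           // endpoint drops the table from 'appInfo' too
      admin.deleteNamespace("Group_foo");  // DeleteNamespaceProcedure, pid=81
    }
  }
}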
2023-07-24 18:10:52,469 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222252469"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:52,471 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-24 18:10:52,475 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:10:52,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 39 msec 2023-07-24 18:10:52,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-24 18:10:52,555 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 80 completed 2023-07-24 18:10:52,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 18:10:52,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,577 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,581 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 18:10:52,584 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,585 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 18:10:52,585 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:52,586 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,588 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:10:52,589 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 19 msec 2023-07-24 18:10:52,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is 
done pid=81 2023-07-24 18:10:52,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:52,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:52,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:52,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:52,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:52,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:52,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:52,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:52,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:52,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:52,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:52,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:52,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35913] to rsgroup default 2023-07-24 18:10:52,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:52,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:52,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-24 18:10:52,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35913,1690222239741] are moved back to appInfo 2023-07-24 18:10:52,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-24 18:10:52,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:52,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-24 18:10:52,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:52,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:52,733 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:52,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:52,736 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:52,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:52,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:52,741 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:52,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:52,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:52,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 364 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223452748, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:52,749 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:52,751 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:52,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,753 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:52,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:52,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:52,776 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=511 (was 506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-372597080_17 at /127.0.0.1:36134 [Waiting for operation #21] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1971928419_17 at /127.0.0.1:42232 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1875798778_17 at /127.0.0.1:60560 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-372597080_17 at /127.0.0.1:60388 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1971928419_17 at /127.0.0.1:60600 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=812 (was 808) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=591 (was 591), ProcessCount=177 (was 177), AvailableMemoryMB=5419 (was 5510) 2023-07-24 18:10:52,776 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 18:10:52,799 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=511, OpenFileDescriptor=812, MaxFileDescriptor=60000, SystemLoadAverage=591, ProcessCount=177, AvailableMemoryMB=5417 2023-07-24 18:10:52,800 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 18:10:52,800 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-24 18:10:52,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:52,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
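For readers following the teardown/setup cycle above (list rsgroup, move tables [] to rsgroup default, move servers [] to rsgroup default, remove and re-add rsgroup master, then the failing move of jenkins-hbase4.apache.org:34677): the stack traces show the failing call is RSGroupAdminClient.moveServers, invoked from TestRSGroupsBase.tearDownAfterMethod, and it fails because the address being moved is the master's RPC endpoint rather than a live region server. A minimal client-side sketch of that call is given below; the connection and constructor wiring are illustrative assumptions, not the exact test code.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveMasterToRSGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Client used by TestRSGroupsBase (via VerifyingRSGroupAdminClient) per the stack trace.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // The "list rsgroup" entries correspond to listRSGroups().
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " -> " + group.getServers());
          }

          // The failing call: the address is the master's RPC endpoint (port 34677),
          // not a live region server, so the master rejects it with
          // ConstraintException("... is either offline or it does not exist").
          try {
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34677)),
                "master");
          } catch (ConstraintException e) {
            // The test logs this as "Got this on setup, FYI" and continues.
          }
        }
      }
    }

ConstraintException is not retried on the client side, which is why the exception surfaces immediately in the WARN above and the harness simply notes it and moves on.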
2023-07-24 18:10:52,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:52,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:52,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:52,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:52,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:52,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:52,819 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:52,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:52,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:52,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:52,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:52,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:52,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:52,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 392 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223452834, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:52,835 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:52,836 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:52,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:52,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:52,838 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:52,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:52,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:52,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:52,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:52,846 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:52,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): 
Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 82 2023-07-24 18:10:52,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 18:10:52,849 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,849 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:52,850 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:52,853 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:52,855 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:52,856 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b empty. 2023-07-24 18:10:52,856 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:52,856 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-24 18:10:52,884 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:52,885 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e1623f30436bb1f18adf7e739d34c10b, NAME => 'Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:52,900 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:52,900 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing e1623f30436bb1f18adf7e739d34c10b, disabling compactions & flushes 2023-07-24 18:10:52,900 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 
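The HMaster entry at 18:10:52,843 shows the table being created from a shell-style descriptor ({TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', ...}). A rough Java Admin equivalent is sketched below for orientation; the test drives this through its own utilities, so the class and connection wiring here are illustrative assumptions rather than the test's code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestCreateAndDropSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testCreateAndDrop"))
              .setRegionReplication(1)                             // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
                  .setBloomFilterType(BloomType.NONE)              // BLOOMFILTER => 'NONE'
                  .setMaxVersions(1)                               // VERSIONS => '1'
                  .setCompressionType(Compression.Algorithm.NONE)  // COMPRESSION => 'NONE'
                  .setBlocksize(65536)                             // BLOCKSIZE => '65536'
                  .build())
              .build();
          // Submitting the request is what stores the CreateTableProcedure (pid=82 above)
          // and walks it through PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS.
          admin.createTable(desc);
        }
      }
    }
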
2023-07-24 18:10:52,900 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:52,900 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. after waiting 0 ms 2023-07-24 18:10:52,900 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:52,901 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:52,901 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e1623f30436bb1f18adf7e739d34c10b: 2023-07-24 18:10:52,911 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:52,913 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222252912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222252912"}]},"ts":"1690222252912"} 2023-07-24 18:10:52,914 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:52,915 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:52,915 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222252915"}]},"ts":"1690222252915"} 2023-07-24 18:10:52,916 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:52,920 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:52,921 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, ASSIGN}] 2023-07-24 18:10:52,923 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, ASSIGN 2023-07-24 18:10:52,923 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37467,1690222246245; forceNewPlan=false, retain=false 2023-07-24 18:10:52,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 18:10:53,074 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:53,075 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=e1623f30436bb1f18adf7e739d34c10b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:53,075 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222253075"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253075"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253075"}]},"ts":"1690222253075"} 2023-07-24 18:10:53,077 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE; OpenRegionProcedure e1623f30436bb1f18adf7e739d34c10b, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:53,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 18:10:53,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 
2023-07-24 18:10:53,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1623f30436bb1f18adf7e739d34c10b, NAME => 'Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:53,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,240 INFO [StoreOpener-e1623f30436bb1f18adf7e739d34c10b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,241 DEBUG [StoreOpener-e1623f30436bb1f18adf7e739d34c10b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/cf 2023-07-24 18:10:53,242 DEBUG [StoreOpener-e1623f30436bb1f18adf7e739d34c10b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/cf 2023-07-24 18:10:53,242 INFO [StoreOpener-e1623f30436bb1f18adf7e739d34c10b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1623f30436bb1f18adf7e739d34c10b columnFamilyName cf 2023-07-24 18:10:53,243 INFO [StoreOpener-e1623f30436bb1f18adf7e739d34c10b-1] regionserver.HStore(310): Store=e1623f30436bb1f18adf7e739d34c10b/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,254 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1623f30436bb1f18adf7e739d34c10b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11829940960, jitterRate=0.10174910724163055}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1623f30436bb1f18adf7e739d34c10b: 2023-07-24 18:10:53,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b., pid=84, masterSystemTime=1690222253229 2023-07-24 18:10:53,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:53,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 
2023-07-24 18:10:53,257 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=e1623f30436bb1f18adf7e739d34c10b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:53,257 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222253257"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253257"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253257"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253257"}]},"ts":"1690222253257"} 2023-07-24 18:10:53,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-24 18:10:53,266 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; OpenRegionProcedure e1623f30436bb1f18adf7e739d34c10b, server=jenkins-hbase4.apache.org,37467,1690222246245 in 182 msec 2023-07-24 18:10:53,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-24 18:10:53,268 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, ASSIGN in 346 msec 2023-07-24 18:10:53,269 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:53,269 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253269"}]},"ts":"1690222253269"} 2023-07-24 18:10:53,271 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-24 18:10:53,274 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:53,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 431 msec 2023-07-24 18:10:53,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-24 18:10:53,451 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 82 completed 2023-07-24 18:10:53,451 DEBUG [Listener at localhost/44627] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. Timeout = 60000ms 2023-07-24 18:10:53,452 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,456 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 
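Once procId 82 completes, the client waits for the region to be assigned and then, as the following entries show, disables and deletes Group_testCreateAndDrop (DisableTableProcedure pid=85, DeleteTableProcedure pid=88, with HFileArchiver archiving the region directory and the rsgroup endpoint removing the table from rsgroup 'default'). A minimal sketch of the corresponding Admin calls follows; the Admin handle is assumed to come from the same connection as in the earlier sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropGroupTestCreateAndDropSketch {
      // 'admin' is assumed to be the same open Admin handle as in the create sketch above.
      static void disableAndDrop(Admin admin) throws IOException {
        TableName name = TableName.valueOf("Group_testCreateAndDrop");
        // "Started disable of Group_testCreateAndDrop" -> DisableTableProcedure (pid=85):
        // the single region is unassigned (TransitRegionStateProcedure UNASSIGN) and the
        // table state in hbase:meta moves DISABLING -> DISABLED.
        admin.disableTable(name);
        // "delete Group_testCreateAndDrop" -> DeleteTableProcedure (pid=88): region files
        // are archived by HFileArchiver and the rsgroup endpoint drops the table from
        // rsgroup 'default'.
        admin.deleteTable(name);
      }
    }
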
2023-07-24 18:10:53,456 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,457 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-24 18:10:53,457 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,460 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-24 18:10:53,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndDrop 2023-07-24 18:10:53,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-24 18:10:53,464 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253464"}]},"ts":"1690222253464"} 2023-07-24 18:10:53,466 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-24 18:10:53,468 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-24 18:10:53,468 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, UNASSIGN}] 2023-07-24 18:10:53,470 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, UNASSIGN 2023-07-24 18:10:53,471 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=e1623f30436bb1f18adf7e739d34c10b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:53,471 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222253471"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253471"}]},"ts":"1690222253471"} 2023-07-24 18:10:53,472 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE; CloseRegionProcedure e1623f30436bb1f18adf7e739d34c10b, server=jenkins-hbase4.apache.org,37467,1690222246245}] 2023-07-24 18:10:53,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-24 18:10:53,624 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1623f30436bb1f18adf7e739d34c10b, 
disabling compactions & flushes 2023-07-24 18:10:53,626 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:53,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:53,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. after waiting 0 ms 2023-07-24 18:10:53,626 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:53,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:53,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b. 2023-07-24 18:10:53,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1623f30436bb1f18adf7e739d34c10b: 2023-07-24 18:10:53,632 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,633 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=e1623f30436bb1f18adf7e739d34c10b, regionState=CLOSED 2023-07-24 18:10:53,633 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222253633"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253633"}]},"ts":"1690222253633"} 2023-07-24 18:10:53,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-24 18:10:53,636 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; CloseRegionProcedure e1623f30436bb1f18adf7e739d34c10b, server=jenkins-hbase4.apache.org,37467,1690222246245 in 162 msec 2023-07-24 18:10:53,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-24 18:10:53,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=e1623f30436bb1f18adf7e739d34c10b, UNASSIGN in 168 msec 2023-07-24 18:10:53,638 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253638"}]},"ts":"1690222253638"} 2023-07-24 18:10:53,639 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-24 18:10:53,641 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-24 18:10:53,642 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 180 msec 2023-07-24 18:10:53,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-24 18:10:53,767 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 85 completed 2023-07-24 18:10:53,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndDrop 2023-07-24 18:10:53,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,770 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=88, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-24 18:10:53,771 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=88, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,775 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,777 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/cf, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/recovered.edits] 2023-07-24 18:10:53,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 18:10:53,782 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b/recovered.edits/4.seqid 2023-07-24 18:10:53,783 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCreateAndDrop/e1623f30436bb1f18adf7e739d34c10b 2023-07-24 18:10:53,783 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-24 18:10:53,786 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=88, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,788 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-24 18:10:53,789 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-24 18:10:53,790 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=88, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,790 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 2023-07-24 18:10:53,791 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222253790"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:53,792 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:53,792 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e1623f30436bb1f18adf7e739d34c10b, NAME => 'Group_testCreateAndDrop,,1690222252843.e1623f30436bb1f18adf7e739d34c10b.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:53,792 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 
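Between the disable request at 18:10:53,461 and the META cleanup above, the master runs DisableTableProcedure pid=85 (the region is closed and marked CLOSED) and then DeleteTableProcedure pid=88 archives the region directory and removes the table from hbase:meta. A client-side sketch of the two calls that trigger these procedures (same assumed connection setup as the sketch above) could look like:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testCreateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(table);  // DisableTableProcedure: close the region, state=DISABLED in hbase:meta
          admin.deleteTable(table);   // DeleteTableProcedure: archive region dirs, remove meta rows and descriptor
        }
      }
    }
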
2023-07-24 18:10:53,792 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222253792"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:53,793 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-24 18:10:53,795 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=88, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:10:53,796 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 28 msec 2023-07-24 18:10:53,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-24 18:10:53,881 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 88 completed 2023-07-24 18:10:53,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:53,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
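What follows is the TestRSGroupsBase teardown/setup pass against the rsgroup endpoint: list the groups, move empty table and server sets to 'default', remove and re-add the 'master' group, and finally try to move the master's address (jenkins-hbase4.apache.org:34677) into it, which the server rejects with the ConstraintException logged below because that address is not a live region server. A rough client-side equivalent using the hbase-rsgroup client (the class and moveServers signature appear in the stack trace below; the exact usage here is an assumption, not the test's code) would be:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.removeRSGroup("master");   // RemoveRSGroup, mirrored by "Writing ZK GroupInfo count: 3"
          rsGroupAdmin.addRSGroup("master");      // AddRSGroup, count goes back to 4
          try {
            // The master's RPC address is not a region server, so the move is rejected
            // server-side with a ConstraintException, as seen in the log below.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34677)),
                "master");
          } catch (ConstraintException e) {
            // The test only logs this ("Got this on setup, FYI") and carries on.
          }
        }
      }
    }
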
2023-07-24 18:10:53,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:53,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:53,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,903 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:53,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 451 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223453916, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:53,917 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:53,922 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,924 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:53,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:53,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,943 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=511 (was 511), OpenFileDescriptor=812 (was 812), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=576 (was 591), ProcessCount=177 (was 177), AvailableMemoryMB=5404 (was 5417) 2023-07-24 18:10:53,943 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 18:10:53,961 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=511, OpenFileDescriptor=812, MaxFileDescriptor=60000, SystemLoadAverage=576, ProcessCount=177, AvailableMemoryMB=5403 2023-07-24 18:10:53,961 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-24 18:10:53,961 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-24 18:10:53,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): 
Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:53,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:53,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:53,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:53,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,977 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:53,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 479 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223453988, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:53,989 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:53,990 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,992 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:53,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:53,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:53,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=89, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:53,997 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:53,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 89 2023-07-24 18:10:53,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-24 18:10:53,999 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,000 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,000 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:54,002 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:54,004 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,004 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb empty. 2023-07-24 18:10:54,005 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,005 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-24 18:10:54,020 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:54,021 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 015fbcc6e3e4b8e7427f2ee47ae9f5cb, NAME => 'Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing 015fbcc6e3e4b8e7427f2ee47ae9f5cb, disabling compactions & flushes 2023-07-24 18:10:54,039 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. after waiting 0 ms 2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,039 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 
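The create request at 18:10:53,994 spells out the full descriptor for Group_testCloneSnapshot: REGION_REPLICATION => '1' and a single family 'test' with BLOOMFILTER NONE, VERSIONS 1, KEEP_DELETED_CELLS FALSE, no block encoding or compression, TTL FOREVER, MIN_VERSIONS 0, block cache enabled, 64 KB blocks, REPLICATION_SCOPE 0. Rebuilt with the 2.x builder API, that descriptor would look roughly like the sketch below (several of these values are already the defaults):

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CloneSnapshotDescriptorSketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor test = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("test"))
            .setBloomFilterType(BloomType.NONE)             // BLOOMFILTER => 'NONE'
            .setInMemory(false)                             // IN_MEMORY => 'false'
            .setMaxVersions(1)                              // VERSIONS => '1'
            .setKeepDeletedCells(KeepDeletedCells.FALSE)    // KEEP_DELETED_CELLS => 'FALSE'
            .setDataBlockEncoding(DataBlockEncoding.NONE)   // DATA_BLOCK_ENCODING => 'NONE'
            .setCompressionType(Compression.Algorithm.NONE) // COMPRESSION => 'NONE'
            .setTimeToLive(HConstants.FOREVER)              // TTL => 'FOREVER'
            .setMinVersions(0)                              // MIN_VERSIONS => '0'
            .setBlockCacheEnabled(true)                     // BLOCKCACHE => 'true'
            .setBlocksize(65536)                            // BLOCKSIZE => '65536'
            .setScope(0)                                    // REPLICATION_SCOPE => '0'
            .build();
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testCloneSnapshot"))
            .setRegionReplication(1)                        // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
            .setColumnFamily(test)
            .build();
        System.out.println(td);
      }
    }
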
2023-07-24 18:10:54,039 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 015fbcc6e3e4b8e7427f2ee47ae9f5cb: 2023-07-24 18:10:54,042 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:54,043 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222254043"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254043"}]},"ts":"1690222254043"} 2023-07-24 18:10:54,044 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:54,045 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:54,045 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222254045"}]},"ts":"1690222254045"} 2023-07-24 18:10:54,047 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-24 18:10:54,056 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:54,056 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:54,057 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:54,057 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:54,057 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:54,057 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:54,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, ASSIGN}] 2023-07-24 18:10:54,059 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, ASSIGN 2023-07-24 18:10:54,060 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43449,1690222239527; forceNewPlan=false, retain=false 2023-07-24 18:10:54,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-24 18:10:54,211 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:54,212 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=015fbcc6e3e4b8e7427f2ee47ae9f5cb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,212 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222254212"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222254212"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222254212"}]},"ts":"1690222254212"} 2023-07-24 18:10:54,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; OpenRegionProcedure 015fbcc6e3e4b8e7427f2ee47ae9f5cb, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:54,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-24 18:10:54,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 015fbcc6e3e4b8e7427f2ee47ae9f5cb, NAME => 'Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:54,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:54,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,376 INFO [StoreOpener-015fbcc6e3e4b8e7427f2ee47ae9f5cb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,378 DEBUG [StoreOpener-015fbcc6e3e4b8e7427f2ee47ae9f5cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/test 2023-07-24 18:10:54,378 DEBUG [StoreOpener-015fbcc6e3e4b8e7427f2ee47ae9f5cb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/test 2023-07-24 18:10:54,379 INFO [StoreOpener-015fbcc6e3e4b8e7427f2ee47ae9f5cb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 015fbcc6e3e4b8e7427f2ee47ae9f5cb columnFamilyName test 2023-07-24 18:10:54,379 INFO [StoreOpener-015fbcc6e3e4b8e7427f2ee47ae9f5cb-1] regionserver.HStore(310): Store=015fbcc6e3e4b8e7427f2ee47ae9f5cb/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:54,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,380 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:54,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:54,386 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 015fbcc6e3e4b8e7427f2ee47ae9f5cb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11542848640, jitterRate=0.07501155138015747}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:54,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 015fbcc6e3e4b8e7427f2ee47ae9f5cb: 2023-07-24 18:10:54,387 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb., pid=91, masterSystemTime=1690222254370 2023-07-24 18:10:54,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 
2023-07-24 18:10:54,389 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=015fbcc6e3e4b8e7427f2ee47ae9f5cb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,390 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222254389"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222254389"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222254389"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222254389"}]},"ts":"1690222254389"} 2023-07-24 18:10:54,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-24 18:10:54,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; OpenRegionProcedure 015fbcc6e3e4b8e7427f2ee47ae9f5cb, server=jenkins-hbase4.apache.org,43449,1690222239527 in 176 msec 2023-07-24 18:10:54,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-24 18:10:54,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, ASSIGN in 336 msec 2023-07-24 18:10:54,396 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:54,396 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222254396"}]},"ts":"1690222254396"} 2023-07-24 18:10:54,398 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-24 18:10:54,400 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:54,402 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 406 msec 2023-07-24 18:10:54,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-24 18:10:54,602 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 89 completed 2023-07-24 18:10:54,602 DEBUG [Listener at localhost/44627] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-24 18:10:54,603 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:54,607 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
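
The CreateTableProcedure run traced above, ending with "Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 89 completed", is what a synchronous client-side createTable call produces. A minimal sketch, assuming the standard HBase 2.x client API; the Connection setup and class name are illustrative rather than taken from the test source:

    // Illustrative only: client-side calls that typically produce a
    // CreateTableProcedure like pid=89 above. Connection setup is assumed.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        // Table and column family names are taken from the log above.
        TableName tableName = TableName.valueOf("Group_testCloneSnapshot");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("test"))
              .build());
          // createTable blocks until the master-side CREATE procedure finishes,
          // matching "procId: 89 completed" above.
        }
      }
    }

Because the call only returns once the procedure is done, the test can move straight on to waiting for region assignment, as the following log lines show.
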
2023-07-24 18:10:54,607 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:54,607 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-24 18:10:54,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1583): Client=jenkins//172.31.14.131 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 18:10:54,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1690222254621 (current time:1690222254621). 2023-07-24 18:10:54,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-24 18:10:54,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] zookeeper.ReadOnlyZKClient(139): Connect 0x0c287ed4 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:54,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7587394f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:54,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:54,634 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:54,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0c287ed4 to 127.0.0.1:59012 2023-07-24 18:10:54,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
2023-07-24 18:10:54,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 18:10:54,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=92, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 18:10:54,656 DEBUG [PEWorker-5] locking.LockProcedure(309): LOCKED pid=92, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 18:10:54,657 INFO [PEWorker-5] procedure2.TimeoutExecutorThread(81): ADDED pid=92, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1690222854657 2023-07-24 18:10:54,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-24 18:10:54,657 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-24 18:10:54,659 DEBUG [PEWorker-1] locking.LockProcedure(242): UNLOCKED pid=92, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 18:10:54,660 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 18:10:54,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 11 msec 2023-07-24 18:10:54,661 DEBUG [PEWorker-1] locking.LockProcedure(309): LOCKED pid=93, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 18:10:54,662 INFO [PEWorker-1] procedure2.TimeoutExecutorThread(81): ADDED pid=93, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1690222854662 2023-07-24 18:10:54,664 DEBUG [Listener at localhost/44627] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-24 18:10:54,665 DEBUG [Listener at localhost/44627] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
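
The snapshot request logged above ({ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }) together with the HBaseAdmin "Sleeping ... while waiting for snapshot completion" loop is what the blocking Admin.snapshot call looks like from the client side. A minimal sketch, assuming the standard HBase 2.x client API and an Admin obtained as in the previous sketch:

    // Illustrative only: a FLUSH snapshot request like the one logged above.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class SnapshotSketch {
      static void takeSnapshot(Admin admin) throws Exception {
        // For an enabled table the master takes a FLUSH-type snapshot,
        // which is what SnapshotManager reports in the log above.
        admin.snapshot("Group_testCloneSnapshot_snap",
            TableName.valueOf("Group_testCloneSnapshot"));
        // The synchronous snapshot() call polls the master until the snapshot
        // is done; that polling is the "Sleeping ... while waiting for
        // snapshot completion" loop visible in the HBaseAdmin log lines.
      }
    }

The clone step implied by the test name does not appear in this excerpt; it would normally be a later call such as admin.cloneSnapshot("Group_testCloneSnapshot_snap", targetTable) on the same Admin, where targetTable is whatever name the test picks.
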
2023-07-24 18:10:54,683 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-24 18:10:54,684 INFO [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-24 18:10:54,684 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 18:10:54,685 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-24 18:10:54,685 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-24 18:10:54,685 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,686 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,688 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,688 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,688 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,688 INFO 
[zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,688 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,688 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,689 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,689 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,690 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-24 18:10:54,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,690 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 18:10:54,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,690 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 18:10:54,691 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-24 18:10:54,694 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 18:10:54,694 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 18:10:54,695 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 18:10:54,698 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-24 18:10:54,699 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 18:10:54,699 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 18:10:54,701 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 18:10:54,701 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 18:10:54,701 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41915,1690222243305' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 18:10:54,702 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 18:10:54,703 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' 
subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 18:10:54,703 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 18:10:54,703 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 18:10:54,704 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-24 18:10:54,705 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,705 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-24 18:10:54,705 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 18:10:54,704 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,35913,1690222239741' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 18:10:54,707 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 18:10:54,705 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-24 18:10:54,708 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,37467,1690222246245' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 18:10:54,708 DEBUG 
[member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,43449,1690222239527' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-24 18:10:54,713 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,714 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,714 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,714 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,714 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 18:10:54,714 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 18:10:54,714 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 18:10:54,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 18:10:54,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 18:10:54,715 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,715 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 18:10:54,715 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,715 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] 
procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-24 18:10:54,715 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,716 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,717 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-24 18:10:54,717 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,43449,1690222239527' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-24 18:10:54,717 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-24 18:10:54,717 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@1ec6e939[Count = 0] remaining members to acquire global barrier 2023-07-24 18:10:54,717 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,719 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,719 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,719 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,720 DEBUG 
[(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-24 18:10:54,719 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,719 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,37467,1690222246245' in zk 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,41915,1690222243305' in zk 2023-07-24 18:10:54,720 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,35913,1690222239741' in zk 2023-07-24 18:10:54,721 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-24 18:10:54,722 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-24 18:10:54,722 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,722 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 18:10:54,722 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 18:10:54,722 DEBUG [member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 18:10:54,724 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 18:10:54,724 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. started... 2023-07-24 18:10:54,724 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 18:10:54,724 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 18:10:54,724 DEBUG [member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 18:10:54,724 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-24 18:10:54,725 DEBUG [member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-24 18:10:54,726 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for 015fbcc6e3e4b8e7427f2ee47ae9f5cb: 2023-07-24 18:10:54,727 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-24 18:10:54,734 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-24 18:10:54,739 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-24 18:10:54,755 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. completed. 2023-07-24 18:10:54,755 DEBUG [rs(jenkins-hbase4.apache.org,43449,1690222239527)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:54,756 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-24 18:10:54,756 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 
2023-07-24 18:10:54,756 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,756 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-24 18:10:54,756 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,43449,1690222239527' in zk 2023-07-24 18:10:54,758 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 18:10:54,758 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 18:10:54,758 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-24 18:10:54,758 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-24 18:10:54,758 DEBUG [member: 'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 
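
The acquired/reached/abort znodes traced above implement the ZooKeeper barrier used for online snapshots, and the repeated "Set watcher on znode that does not yet exist" lines are exists-watches on barrier nodes the coordinator has not created yet. A minimal sketch of that watch pattern with the plain ZooKeeper client, using a path copied from the log; the class name and the simplified error handling are assumptions:

    // Illustrative only: watching a barrier znode that may not exist yet,
    // as in the "Set watcher on znode that does not yet exist" lines above.
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    final class BarrierWatchSketch {
      static void watchReachedNode(ZooKeeper zk) throws Exception {
        String reached = "/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap";
        Watcher watcher = (WatchedEvent event) -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated) {
            // Coordinator created the "reached" barrier; the member may now
            // run its in-barrier work (the region flush/snapshot above).
          }
        };
        // exists() registers the watcher even when the node is absent, so the
        // member is notified as soon as the coordinator creates the barrier.
        if (zk.exists(reached, watcher) != null) {
          // Node already present: the barrier was reached before we watched.
        }
      }
    }
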
2023-07-24 18:10:54,759 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 18:10:54,759 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 18:10:54,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,760 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,761 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,761 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-24 18:10:54,761 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,761 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,762 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,762 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,762 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,764 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase4.apache.org,43449,1690222239527': 2023-07-24 18:10:54,764 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,43449,1690222239527' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. Waiting for 0 more 2023-07-24 18:10:54,764 INFO [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-24 18:10:54,764 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 2023-07-24 18:10:54,764 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-24 18:10:54,764 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-24 18:10:54,764 INFO [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-24 18:10:54,765 DEBUG [Listener at localhost/44627] client.HBaseAdmin(2434): Getting current status of snapshot from master... 
2023-07-24 18:10:54,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,768 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, 
quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,768 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,768 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-24 18:10:54,769 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,769 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,769 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,769 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,769 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-24 18:10:54,769 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 
2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,770 DEBUG [Listener at localhost/44627] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,771 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,770 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,771 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,771 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-24 18:10:54,771 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,772 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-24 18:10:54,773 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-24 18:10:54,773 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43449,1690222239527 
2023-07-24 18:10:54,774 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,774 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,774 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,775 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,776 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,776 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,781 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,781 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,781 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,780 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,780 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,780 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-24 18:10:54,781 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,781 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-24 18:10:54,782 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-24 18:10:54,782 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,782 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,781 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,781 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:54,781 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-24 18:10:54,784 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,784 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,784 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:54,781 DEBUG [(jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
2023-07-24 18:10:54,784 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,784 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,785 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,784 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:54,786 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:54,786 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:54,786 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,786 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-24 18:10:54,786 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:54,786 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,786 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,826 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,861 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 
2023-07-24 18:10:54,861 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,862 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-24 18:10:54,862 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1690222254657Consolidate snapshot: Group_testCloneSnapshot_snap at 1690222254782 (+125 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1690222254782Writing data manifest for Group_testCloneSnapshot_snap at 1690222254796 (+14 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1690222254815 (+19 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1690222254861 (+46 ms) 2023-07-24 18:10:54,864 DEBUG [PEWorker-3] locking.LockProcedure(242): UNLOCKED pid=93, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 18:10:54,866 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 205 msec 2023-07-24 18:10:54,970 DEBUG [Listener at localhost/44627] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-24 18:10:54,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-24 18:10:54,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 2023-07-24 18:10:54,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-24 18:10:54,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:54,992 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 
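The records above show the master completing the FLUSH snapshot Group_testCloneSnapshot_snap and then accepting the request to clone it to Group_testCloneSnapshot_clone (CloneSnapshotProcedure pid=94 below). For reference, a minimal client-side sketch of that flow against the public Admin API; this is not the test's actual code (the class name and connection setup are illustrative), only the snapshot and table names are taken from the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Illustrative sketch only; not part of TestRSGroupsBasics.
    public class CloneSnapshotSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName source = TableName.valueOf("Group_testCloneSnapshot");
          TableName clone = TableName.valueOf("Group_testCloneSnapshot_clone");
          // snapshot() is synchronous: the client polls the master until the snapshot is done,
          // which is the "Sleeping: 200ms while waiting for snapshot completion" seen above.
          admin.snapshot("Group_testCloneSnapshot_snap", source);
          // The clone request runs on the master as a CloneSnapshotProcedure and the call
          // returns once the restored region has been created and assigned.
          admin.cloneSnapshot("Group_testCloneSnapshot_snap", clone);
        }
      }
    }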
2023-07-24 18:10:55,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 18:10:55,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-24 18:10:55,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:55,038 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:55,044 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 2023-07-24 18:10:55,045 DEBUG [PEWorker-2] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-24 18:10:55,046 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(239): region to add: 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:55,046 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(585): clone region=015fbcc6e3e4b8e7427f2ee47ae9f5cb as eb6047d7a95deda2972f6f17844d7209 in snapshot Group_testCloneSnapshot_snap 2023-07-24 18:10:55,047 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => eb6047d7a95deda2972f6f17844d7209, NAME => 'Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing eb6047d7a95deda2972f6f17844d7209, disabling compactions & flushes 2023-07-24 18:10:55,058 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 
2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. after waiting 0 ms 2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,058 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,058 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for eb6047d7a95deda2972f6f17844d7209: 2023-07-24 18:10:55,059 INFO [PEWorker-2] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 2023-07-24 18:10:55,059 INFO [PEWorker-2] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-24 18:10:55,062 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690222255062"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222255062"}]},"ts":"1690222255062"} 2023-07-24 18:10:55,064 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:55,065 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222255064"}]},"ts":"1690222255064"} 2023-07-24 18:10:55,066 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:55,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:55,071 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, ASSIGN}] 2023-07-24 18:10:55,073 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, ASSIGN 2023-07-24 18:10:55,074 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, 
region=eb6047d7a95deda2972f6f17844d7209, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43449,1690222239527; forceNewPlan=false, retain=false 2023-07-24 18:10:55,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:55,225 INFO [jenkins-hbase4:34677] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:55,226 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=eb6047d7a95deda2972f6f17844d7209, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:55,226 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690222255226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222255226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222255226"}]},"ts":"1690222255226"} 2023-07-24 18:10:55,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure eb6047d7a95deda2972f6f17844d7209, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:55,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:55,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb6047d7a95deda2972f6f17844d7209, NAME => 'Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,386 INFO [StoreOpener-eb6047d7a95deda2972f6f17844d7209-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,387 DEBUG [StoreOpener-eb6047d7a95deda2972f6f17844d7209-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/test 2023-07-24 18:10:55,387 DEBUG [StoreOpener-eb6047d7a95deda2972f6f17844d7209-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/test 2023-07-24 18:10:55,388 INFO [StoreOpener-eb6047d7a95deda2972f6f17844d7209-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb6047d7a95deda2972f6f17844d7209 columnFamilyName test 2023-07-24 18:10:55,388 INFO [StoreOpener-eb6047d7a95deda2972f6f17844d7209-1] regionserver.HStore(310): Store=eb6047d7a95deda2972f6f17844d7209/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:55,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:55,394 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:55,395 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb6047d7a95deda2972f6f17844d7209; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10440380320, jitterRate=-0.027663812041282654}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:55,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb6047d7a95deda2972f6f17844d7209: 2023-07-24 18:10:55,396 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209., pid=96, masterSystemTime=1690222255379 2023-07-24 18:10:55,397 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,397 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:55,398 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=eb6047d7a95deda2972f6f17844d7209, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:55,398 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690222255398"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222255398"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222255398"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222255398"}]},"ts":"1690222255398"} 2023-07-24 18:10:55,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-24 18:10:55,402 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure eb6047d7a95deda2972f6f17844d7209, server=jenkins-hbase4.apache.org,43449,1690222239527 in 172 msec 2023-07-24 18:10:55,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-24 18:10:55,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, ASSIGN in 332 msec 2023-07-24 18:10:55,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222255406"}]},"ts":"1690222255406"} 2023-07-24 18:10:55,407 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-24 18:10:55,411 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 ) in 411 msec 2023-07-24 18:10:55,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:55,618 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 94 completed 2023-07-24 18:10:55,619 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-24 18:10:55,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot 2023-07-24 18:10:55,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:55,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222255623"}]},"ts":"1690222255623"} 2023-07-24 18:10:55,624 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-24 18:10:55,626 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-24 18:10:55,627 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, UNASSIGN}] 2023-07-24 18:10:55,630 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, UNASSIGN 2023-07-24 18:10:55,631 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=015fbcc6e3e4b8e7427f2ee47ae9f5cb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:55,631 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222255631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222255631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222255631"}]},"ts":"1690222255631"} 2023-07-24 18:10:55,632 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; CloseRegionProcedure 015fbcc6e3e4b8e7427f2ee47ae9f5cb, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:55,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:55,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:55,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 015fbcc6e3e4b8e7427f2ee47ae9f5cb, disabling compactions & flushes 2023-07-24 18:10:55,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:55,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:55,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. after waiting 0 ms 2023-07-24 18:10:55,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 
2023-07-24 18:10:55,789 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-24 18:10:55,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb. 2023-07-24 18:10:55,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 015fbcc6e3e4b8e7427f2ee47ae9f5cb: 2023-07-24 18:10:55,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:55,792 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=015fbcc6e3e4b8e7427f2ee47ae9f5cb, regionState=CLOSED 2023-07-24 18:10:55,792 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1690222255792"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222255792"}]},"ts":"1690222255792"} 2023-07-24 18:10:55,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-24 18:10:55,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; CloseRegionProcedure 015fbcc6e3e4b8e7427f2ee47ae9f5cb, server=jenkins-hbase4.apache.org,43449,1690222239527 in 162 msec 2023-07-24 18:10:55,797 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-24 18:10:55,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=015fbcc6e3e4b8e7427f2ee47ae9f5cb, UNASSIGN in 169 msec 2023-07-24 18:10:55,798 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222255798"}]},"ts":"1690222255798"} 2023-07-24 18:10:55,800 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-24 18:10:55,806 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-24 18:10:55,808 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 188 msec 2023-07-24 18:10:55,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:55,926 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 97 completed 2023-07-24 18:10:55,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot 2023-07-24 18:10:55,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 
18:10:55,930 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-24 18:10:55,931 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=100, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:55,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:55,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:55,935 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:55,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 18:10:55,937 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/recovered.edits, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/test] 2023-07-24 18:10:55,943 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/recovered.edits/5.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb/recovered.edits/5.seqid 2023-07-24 18:10:55,945 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot/015fbcc6e3e4b8e7427f2ee47ae9f5cb 2023-07-24 18:10:55,945 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-24 18:10:55,948 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=100, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,951 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-24 18:10:55,953 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
2023-07-24 18:10:55,955 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=100, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,955 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-24 18:10:55,955 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222255955"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:55,957 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:55,957 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 015fbcc6e3e4b8e7427f2ee47ae9f5cb, NAME => 'Group_testCloneSnapshot,,1690222253994.015fbcc6e3e4b8e7427f2ee47ae9f5cb.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:55,957 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-24 18:10:55,957 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222255957"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:55,958 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-24 18:10:55,961 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=100, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:10:55,962 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 34 msec 2023-07-24 18:10:56,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-24 18:10:56,037 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 100 completed 2023-07-24 18:10:56,038 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-24 18:10:56,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot_clone 2023-07-24 18:10:56,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 18:10:56,042 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222256042"}]},"ts":"1690222256042"} 2023-07-24 18:10:56,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-24 18:10:56,045 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-24 18:10:56,046 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, UNASSIGN}] 2023-07-24 18:10:56,048 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, UNASSIGN 2023-07-24 18:10:56,049 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=eb6047d7a95deda2972f6f17844d7209, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:56,049 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690222256049"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222256049"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222256049"}]},"ts":"1690222256049"} 2023-07-24 18:10:56,050 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; CloseRegionProcedure eb6047d7a95deda2972f6f17844d7209, server=jenkins-hbase4.apache.org,43449,1690222239527}] 2023-07-24 18:10:56,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 18:10:56,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:56,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb6047d7a95deda2972f6f17844d7209, disabling compactions & flushes 2023-07-24 18:10:56,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:56,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:56,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. after waiting 0 ms 2023-07-24 18:10:56,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 2023-07-24 18:10:56,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:56,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209. 
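Both tables are now being torn down: DisableTableProcedure unassigns each table's single region, and DeleteTableProcedure then archives the region directory (the HFileArchiver records above and below) and removes the table from hbase:meta and from the rsgroup bookkeeping. From the client side this is the ordinary disable-then-delete sequence; a minimal sketch, assuming a standard Connection (the class name and loop are illustrative, the table names come from the log):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Illustrative sketch only; not part of TestRSGroupsBasics.
    public class DropClonedTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (String name : new String[] {"Group_testCloneSnapshot", "Group_testCloneSnapshot_clone"}) {
            TableName table = TableName.valueOf(name);
            admin.disableTable(table);   // DisableTableProcedure: region CLOSING -> CLOSED in hbase:meta
            admin.deleteTable(table);    // DeleteTableProcedure: archive region dir, delete meta rows
          }
        }
      }
    }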
2023-07-24 18:10:56,209 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb6047d7a95deda2972f6f17844d7209: 2023-07-24 18:10:56,211 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:56,212 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=eb6047d7a95deda2972f6f17844d7209, regionState=CLOSED 2023-07-24 18:10:56,212 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1690222256212"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222256212"}]},"ts":"1690222256212"} 2023-07-24 18:10:56,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-24 18:10:56,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; CloseRegionProcedure eb6047d7a95deda2972f6f17844d7209, server=jenkins-hbase4.apache.org,43449,1690222239527 in 164 msec 2023-07-24 18:10:56,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-24 18:10:56,222 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=eb6047d7a95deda2972f6f17844d7209, UNASSIGN in 173 msec 2023-07-24 18:10:56,222 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222256222"}]},"ts":"1690222256222"} 2023-07-24 18:10:56,224 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-24 18:10:56,226 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-24 18:10:56,229 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 189 msec 2023-07-24 18:10:56,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-24 18:10:56,345 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 101 completed 2023-07-24 18:10:56,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot_clone 2023-07-24 18:10:56,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,351 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-24 18:10:56,352 
DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=104, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:56,356 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:56,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 18:10:56,360 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/recovered.edits, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/test] 2023-07-24 18:10:56,366 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209/recovered.edits/4.seqid 2023-07-24 18:10:56,368 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/default/Group_testCloneSnapshot_clone/eb6047d7a95deda2972f6f17844d7209 2023-07-24 18:10:56,368 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-24 18:10:56,372 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=104, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,374 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-24 18:10:56,376 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-24 18:10:56,381 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=104, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,381 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-24 18:10:56,381 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222256381"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:56,386 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:56,387 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => eb6047d7a95deda2972f6f17844d7209, NAME => 'Group_testCloneSnapshot_clone,,1690222253994.eb6047d7a95deda2972f6f17844d7209.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:56,387 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-24 18:10:56,387 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222256387"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:56,392 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-24 18:10:56,394 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=104, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:10:56,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 48 msec 2023-07-24 18:10:56,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-24 18:10:56,461 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 104 completed 2023-07-24 18:10:56,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:56,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
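With both tables gone, the test's tearDownAfterMethod (running on the Listener thread) performs the rsgroup cleanup: stray tables and servers are moved back to the default group, the master group is removed and re-added, and the master's own address is then moved into it, which RSGroupAdminServer rejects with a ConstraintException, apparently because the master is not registered as a live region server. A rough sketch of that last step, using the RSGroupAdminClient named in the stack trace below; the constructor and method signatures are assumed from the branch-2.4 hbase-rsgroup module, and the class name is illustrative, not copied from TestRSGroupsBase:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Illustrative sketch only; mirrors the teardown calls logged above and below.
    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.removeRSGroup("master");   // "remove rsgroup master"
          groups.addRSGroup("master");      // "add rsgroup master"
          // The master address is not a region server, so this call fails with
          // ConstraintException: "Server ... is either offline or it does not exist."
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34677)),
              "master");
        }
      }
    }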
2023-07-24 18:10:56,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:56,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:56,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:56,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:56,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:56,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:56,479 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:56,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:56,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:56,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:56,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:56,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:56,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 563 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223456493, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:56,494 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:56,496 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:56,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,497 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:56,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:56,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:56,515 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=514 (was 511) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,37467,1690222246245' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1173938704_17 at /127.0.0.1:42232 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1e045fb3-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: (jenkins-hbase4.apache.org,34677,1690222237492)-proc-coordinator-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,35913,1690222239741' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1707344554_17 at /127.0.0.1:38374 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,41915,1690222243305' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 
'jenkins-hbase4.apache.org,43449,1690222239527' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1707344554_17 at /127.0.0.1:52522 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=807 (was 812), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=576 (was 576), ProcessCount=177 (was 177), AvailableMemoryMB=5401 (was 5403) 2023-07-24 18:10:56,515 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 18:10:56,534 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=514, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=576, ProcessCount=177, AvailableMemoryMB=5399 2023-07-24 18:10:56,534 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-24 18:10:56,534 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:56,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:56,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:56,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:56,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:56,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:56,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:56,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:56,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:56,552 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:56,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:56,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 
18:10:56,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:56,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:56,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:56,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:56,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 591 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223456562, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:56,563 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:56,564 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,565 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:56,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:56,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:56,566 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:56,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:56,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:56,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-24 18:10:56,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:56,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:56,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:56,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35913] to rsgroup appInfo 2023-07-24 18:10:56,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:56,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:56,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:56,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35913,1690222239741] are moved back to default 2023-07-24 18:10:56,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-24 18:10:56,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:56,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:56,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:56,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-24 18:10:56,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:56,597 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-24 18:10:56,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.ServerManager(636): Server jenkins-hbase4.apache.org,35913,1690222239741 added to draining server list. 2023-07-24 18:10:56,599 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:56,600 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase4.apache.org,35913,1690222239741 is already in the draining server list.Ignoring request to add it again. 
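The setup recorded just above (add rsgroup 'appInfo', move the region server at port 35913 into it, then place that server on the draining list) could be driven by client calls roughly like the sketch below. Illustrative only: the test itself appears to act on the /hbase/draining znode directly, whereas the sketch uses Admin.decommissionRegionServers as a public call with the same effect; the helper class and method names are assumptions.

// Hypothetical sketch: create the 'appInfo' group, move one region server into it,
// then decommission (drain) that server so the group has no usable servers.
import java.util.Collections;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AppInfoGroupSetupSketch {
  static void setUp(Connection conn, ServerName rs) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("appInfo");
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts(rs.getHostname(), rs.getPort())), "appInfo");
    // The log shows the server being added to the draining list via ZooKeeper;
    // decommissionRegionServers(servers, offload=false) is the public Admin equivalent.
    try (Admin admin = conn.getAdmin()) {
      admin.decommissionRegionServers(Collections.singletonList(rs), false);
    }
  }
}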
2023-07-24 18:10:56,600 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase4.apache.org,35913,1690222239741] 2023-07-24 18:10:56,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-24 18:10:56,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=105, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:56,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-24 18:10:56,610 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:56,619 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 15 msec 2023-07-24 18:10:56,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-24 18:10:56,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:56,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:56,711 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=106, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:56,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 106 2023-07-24 18:10:56,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 18:10:56,727 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=18 msec 2023-07-24 18:10:56,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 18:10:56,819 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: 
Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 106 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-24 18:10:56,819 DEBUG [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 18:10:56,825 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:56,826 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-24 18:10:56,826 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase4.apache.org,35913,1690222239741] 2023-07-24 18:10:56,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:56,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:56,831 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:56,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 107 2023-07-24 18:10:56,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 18:10:56,833 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:56,833 DEBUG 
[PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:56,834 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:56,834 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:56,836 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:56,838 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:56,838 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 empty. 2023-07-24 18:10:56,839 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:56,839 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-24 18:10:56,863 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:56,864 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 39f0df15be5eb78b0345aa42fa41e004, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 39f0df15be5eb78b0345aa42fa41e004, disabling compactions & flushes 2023-07-24 18:10:56,880 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 
2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. after waiting 0 ms 2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:56,880 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:56,880 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 39f0df15be5eb78b0345aa42fa41e004: 2023-07-24 18:10:56,883 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:56,884 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222256883"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222256883"}]},"ts":"1690222256883"} 2023-07-24 18:10:56,885 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:10:56,886 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:56,886 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222256886"}]},"ts":"1690222256886"} 2023-07-24 18:10:56,887 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-24 18:10:56,891 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, ASSIGN}] 2023-07-24 18:10:56,893 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, ASSIGN 2023-07-24 18:10:56,899 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35913,1690222239741; forceNewPlan=false, retain=false 2023-07-24 18:10:56,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 18:10:57,051 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=39f0df15be5eb78b0345aa42fa41e004, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:57,051 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222257051"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222257051"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222257051"}]},"ts":"1690222257051"} 2023-07-24 18:10:57,053 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; OpenRegionProcedure 39f0df15be5eb78b0345aa42fa41e004, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:57,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 18:10:57,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 
2023-07-24 18:10:57,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39f0df15be5eb78b0345aa42fa41e004, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:57,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:57,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,210 INFO [StoreOpener-39f0df15be5eb78b0345aa42fa41e004-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,212 DEBUG [StoreOpener-39f0df15be5eb78b0345aa42fa41e004-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/f 2023-07-24 18:10:57,212 DEBUG [StoreOpener-39f0df15be5eb78b0345aa42fa41e004-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/f 2023-07-24 18:10:57,212 INFO [StoreOpener-39f0df15be5eb78b0345aa42fa41e004-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39f0df15be5eb78b0345aa42fa41e004 columnFamilyName f 2023-07-24 18:10:57,213 INFO [StoreOpener-39f0df15be5eb78b0345aa42fa41e004-1] regionserver.HStore(310): Store=39f0df15be5eb78b0345aa42fa41e004/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:57,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:57,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 39f0df15be5eb78b0345aa42fa41e004; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10958481440, jitterRate=0.020588114857673645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:57,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 39f0df15be5eb78b0345aa42fa41e004: 2023-07-24 18:10:57,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004., pid=109, masterSystemTime=1690222257204 2023-07-24 18:10:57,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:57,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 
2023-07-24 18:10:57,224 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=39f0df15be5eb78b0345aa42fa41e004, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:57,225 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222257224"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222257224"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222257224"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222257224"}]},"ts":"1690222257224"} 2023-07-24 18:10:57,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-24 18:10:57,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; OpenRegionProcedure 39f0df15be5eb78b0345aa42fa41e004, server=jenkins-hbase4.apache.org,35913,1690222239741 in 173 msec 2023-07-24 18:10:57,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-24 18:10:57,230 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, ASSIGN in 337 msec 2023-07-24 18:10:57,231 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:57,231 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222257231"}]},"ts":"1690222257231"} 2023-07-24 18:10:57,232 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-24 18:10:57,234 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:57,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 406 msec 2023-07-24 18:10:57,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-24 18:10:57,436 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 107 completed 2023-07-24 18:10:57,437 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:57,442 INFO [Listener at localhost/44627] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_ns:testCreateWhenRsgroupNoOnlineServers 
2023-07-24 18:10:57,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:57,449 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222257448"}]},"ts":"1690222257448"} 2023-07-24 18:10:57,450 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-24 18:10:57,452 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-24 18:10:57,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, UNASSIGN}] 2023-07-24 18:10:57,454 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, UNASSIGN 2023-07-24 18:10:57,454 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=39f0df15be5eb78b0345aa42fa41e004, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:57,454 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222257454"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222257454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222257454"}]},"ts":"1690222257454"} 2023-07-24 18:10:57,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 39f0df15be5eb78b0345aa42fa41e004, server=jenkins-hbase4.apache.org,35913,1690222239741}] 2023-07-24 18:10:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:57,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 39f0df15be5eb78b0345aa42fa41e004, disabling compactions & flushes 2023-07-24 18:10:57,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:57,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 
2023-07-24 18:10:57,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. after waiting 0 ms 2023-07-24 18:10:57,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:57,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:57,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004. 2023-07-24 18:10:57,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 39f0df15be5eb78b0345aa42fa41e004: 2023-07-24 18:10:57,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,620 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=39f0df15be5eb78b0345aa42fa41e004, regionState=CLOSED 2023-07-24 18:10:57,620 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222257620"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222257620"}]},"ts":"1690222257620"} 2023-07-24 18:10:57,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 18:10:57,623 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 39f0df15be5eb78b0345aa42fa41e004, server=jenkins-hbase4.apache.org,35913,1690222239741 in 166 msec 2023-07-24 18:10:57,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-24 18:10:57,628 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=39f0df15be5eb78b0345aa42fa41e004, UNASSIGN in 170 msec 2023-07-24 18:10:57,629 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222257629"}]},"ts":"1690222257629"} 2023-07-24 18:10:57,630 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-24 18:10:57,634 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-24 18:10:57,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 193 msec 2023-07-24 18:10:57,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:57,751 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 110 completed 2023-07-24 18:10:57,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,755 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-24 18:10:57,756 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:57,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:57,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:57,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:57,760 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,762 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/f, FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/recovered.edits] 2023-07-24 18:10:57,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 18:10:57,768 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/recovered.edits/4.seqid to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004/recovered.edits/4.seqid 2023-07-24 18:10:57,769 DEBUG [HFileArchiver-1] 
backup.HFileArchiver(596): Deleted hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/39f0df15be5eb78b0345aa42fa41e004 2023-07-24 18:10:57,769 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-24 18:10:57,772 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,774 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-24 18:10:57,776 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-24 18:10:57,781 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,781 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-24 18:10:57,781 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222257781"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:57,783 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:57,783 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 39f0df15be5eb78b0345aa42fa41e004, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1690222256828.39f0df15be5eb78b0345aa42fa41e004.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:57,783 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
2023-07-24 18:10:57,783 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222257783"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:57,785 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-24 18:10:57,787 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:10:57,788 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 35 msec 2023-07-24 18:10:57,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 18:10:57,864 INFO [Listener at localhost/44627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 113 completed 2023-07-24 18:10:57,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_ns 2023-07-24 18:10:57,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,871 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,873 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:57,875 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,877 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-24 18:10:57,877 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:57,877 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,879 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:10:57,880 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 11 msec 2023-07-24 18:10:57,901 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate 
configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:57,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:57,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:57,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:57,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:57,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:57,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:57,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:57,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:57,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:57,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:57,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:57,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:57,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:57,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:57,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:57,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:57,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35913] to rsgroup default 2023-07-24 18:10:57,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:57,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-24 18:10:57,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-24 18:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35913,1690222239741] are moved back to appInfo 2023-07-24 18:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-24 18:10:57,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:58,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-24 18:10:58,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:58,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:58,007 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:58,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:58,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:58,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:58,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 
18:10:58,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:58,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:58,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 693 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223458019, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:58,020 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:58,022 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:58,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,023 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:58,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:58,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:58,045 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=515 (was 514) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-372597080_17 at /127.0.0.1:52522 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3122a34e-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=807 (was 807), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=576 (was 576), ProcessCount=177 (was 177), AvailableMemoryMB=5308 (was 5399) 2023-07-24 18:10:58,045 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-24 18:10:58,064 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=515, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=576, ProcessCount=177, AvailableMemoryMB=5308 2023-07-24 18:10:58,065 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-24 18:10:58,065 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-24 18:10:58,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:58,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:58,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:58,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:58,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:58,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:58,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:58,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:58,082 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:58,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:58,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:58,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:58,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:58,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:58,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:58,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 721 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223458099, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:58,100 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:58,102 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:58,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,104 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:58,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:58,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:58,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:58,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:58,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] 
to rsgroup default 2023-07-24 18:10:58,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:58,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:58,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:58,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:58,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:58,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:58,129 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:58,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:58,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:58,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:58,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:58,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:58,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server 
jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:58,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 751 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223458142, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:58,143 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:58,145 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:58,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,146 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:58,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:58,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:58,172 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=516 (was 515) - Thread LEAK? 
-, OpenFileDescriptor=807 (was 807), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=576 (was 576), ProcessCount=177 (was 177), AvailableMemoryMB=5306 (was 5308) 2023-07-24 18:10:58,172 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-24 18:10:58,197 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=516, OpenFileDescriptor=807, MaxFileDescriptor=60000, SystemLoadAverage=576, ProcessCount=177, AvailableMemoryMB=5303 2023-07-24 18:10:58,197 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-24 18:10:58,197 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-24 18:10:58,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:58,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:58,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:58,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:58,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:58,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:58,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:58,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:58,218 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:58,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:58,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,223 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:58,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:58,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:58,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34677] to rsgroup master 2023-07-24 18:10:58,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:58,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] ipc.CallRunner(144): callId: 779 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:54766 deadline: 1690223458234, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 2023-07-24 18:10:58,235 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34677 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:58,237 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:58,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:58,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:58,239 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35913, jenkins-hbase4.apache.org:37467, jenkins-hbase4.apache.org:41915, jenkins-hbase4.apache.org:43449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:58,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:58,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34677] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:58,240 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-24 18:10:58,240 INFO [Listener at localhost/44627] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0babe21e to 127.0.0.1:59012 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(257): Found active master hash=1895294886, stopped=false 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:10:58,241 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:10:58,241 INFO [Listener at localhost/44627] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:58,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:58,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:58,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:58,244 DEBUG [Listener at 
localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:58,244 INFO [Listener at localhost/44627] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:10:58,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:58,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:58,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:58,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:58,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:58,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:58,250 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x439cd75f to 127.0.0.1:59012 2023-07-24 18:10:58,251 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,251 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:58,251 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43449,1690222239527' ***** 2023-07-24 18:10:58,251 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:58,251 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35913,1690222239741' ***** 2023-07-24 18:10:58,251 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:58,251 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:58,252 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41915,1690222243305' ***** 2023-07-24 18:10:58,252 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:58,252 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:58,255 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37467,1690222246245' ***** 2023-07-24 18:10:58,255 INFO [Listener at localhost/44627] 
regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:58,255 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:58,258 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:58,266 INFO [RS:3;jenkins-hbase4:41915] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4988aa8e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:58,266 INFO [RS:0;jenkins-hbase4:43449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2b321cdc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:58,266 INFO [RS:4;jenkins-hbase4:37467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7af1f8e9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:58,266 INFO [RS:1;jenkins-hbase4:35913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6b2e3c5c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:58,266 INFO [RS:0;jenkins-hbase4:43449] server.AbstractConnector(383): Stopped ServerConnector@61e8112f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,266 INFO [RS:3;jenkins-hbase4:41915] server.AbstractConnector(383): Stopped ServerConnector@4ba2fb0a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,267 INFO [RS:0;jenkins-hbase4:43449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:58,266 INFO [RS:4;jenkins-hbase4:37467] server.AbstractConnector(383): Stopped ServerConnector@6a64c925{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,267 INFO [RS:1;jenkins-hbase4:35913] server.AbstractConnector(383): Stopped ServerConnector@74cb4810{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,267 INFO [RS:3;jenkins-hbase4:41915] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:58,267 INFO [RS:1;jenkins-hbase4:35913] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:58,267 INFO [RS:4;jenkins-hbase4:37467] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:58,271 INFO [RS:3;jenkins-hbase4:41915] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f60de8e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:58,271 INFO [RS:0;jenkins-hbase4:43449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@fa9dc3d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:58,272 INFO [RS:1;jenkins-hbase4:35913] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4677833f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:58,273 INFO [RS:0;jenkins-hbase4:43449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6f1fe7bc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:58,272 INFO [RS:3;jenkins-hbase4:41915] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50e6c6eb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:58,272 INFO [RS:4;jenkins-hbase4:37467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5cbf4294{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:58,274 INFO [RS:1;jenkins-hbase4:35913] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a4c486{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:58,275 INFO [RS:4;jenkins-hbase4:37467] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@497e6e88{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:58,275 INFO [RS:3;jenkins-hbase4:41915] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:58,275 INFO [RS:0;jenkins-hbase4:43449] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:58,276 INFO [RS:0;jenkins-hbase4:43449] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:58,276 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:58,276 INFO [RS:0;jenkins-hbase4:43449] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:58,276 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:58,276 INFO [RS:4;jenkins-hbase4:37467] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:58,276 INFO [RS:3;jenkins-hbase4:41915] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:58,276 INFO [RS:3;jenkins-hbase4:41915] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:58,276 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:58,276 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:58,276 INFO [RS:4;jenkins-hbase4:37467] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-24 18:10:58,276 DEBUG [RS:3;jenkins-hbase4:41915] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3bf03c53 to 127.0.0.1:59012 2023-07-24 18:10:58,276 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(3305): Received CLOSE for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:58,276 DEBUG [RS:3;jenkins-hbase4:41915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,277 INFO [RS:3;jenkins-hbase4:41915] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:58,277 INFO [RS:3;jenkins-hbase4:41915] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:58,277 INFO [RS:3;jenkins-hbase4:41915] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:58,277 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:10:58,276 INFO [RS:4;jenkins-hbase4:37467] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:58,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f93db382913b37f9661cac1fd8ee01a9, disabling compactions & flushes 2023-07-24 18:10:58,277 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:58,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:58,280 DEBUG [RS:0;jenkins-hbase4:43449] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x55fcd743 to 127.0.0.1:59012 2023-07-24 18:10:58,280 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(3305): Received CLOSE for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:58,280 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:10:58,280 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:58,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:10:58,280 DEBUG [RS:4;jenkins-hbase4:37467] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fc2f1f5 to 127.0.0.1:59012 2023-07-24 18:10:58,280 DEBUG [RS:4;jenkins-hbase4:37467] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,280 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:10:58,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:58,280 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-24 18:10:58,280 DEBUG [RS:0;jenkins-hbase4:43449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,281 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:10:58,281 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1478): Online Regions={f93db382913b37f9661cac1fd8ee01a9=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.} 2023-07-24 18:10:58,280 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:58,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. after waiting 0 ms 2023-07-24 18:10:58,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:58,281 INFO [RS:1;jenkins-hbase4:35913] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:58,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f93db382913b37f9661cac1fd8ee01a9 1/1 column families, dataSize=15.26 KB heapSize=24.78 KB 2023-07-24 18:10:58,282 INFO [RS:1;jenkins-hbase4:35913] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:58,282 INFO [RS:1;jenkins-hbase4:35913] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:58,282 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:58,282 DEBUG [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1504): Waiting on f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:10:58,282 DEBUG [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 18:10:58,281 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:58,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:58,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:10:58,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:10:58,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b3e0fb36cbe9750f5f2b47d078547932 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 18:10:58,281 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:10:58,283 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1478): Online Regions={b3e0fb36cbe9750f5f2b47d078547932=hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.} 2023-07-24 18:10:58,283 DEBUG [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1504): Waiting on b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:10:58,282 DEBUG [RS:1;jenkins-hbase4:35913] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3d166dae to 127.0.0.1:59012 2023-07-24 18:10:58,282 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:58,283 DEBUG [RS:1;jenkins-hbase4:35913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,283 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35913,1690222239741; all regions closed. 2023-07-24 18:10:58,283 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:58,283 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:58,283 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:58,283 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=45.38 KB heapSize=72.89 KB 2023-07-24 18:10:58,302 DEBUG [RS:1;jenkins-hbase4:35913] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:58,302 INFO [RS:1;jenkins-hbase4:35913] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35913%2C1690222239741:(num 1690222241849) 2023-07-24 18:10:58,302 DEBUG [RS:1;jenkins-hbase4:35913] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,302 INFO [RS:1;jenkins-hbase4:35913] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,307 INFO [RS:1;jenkins-hbase4:35913] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:58,312 INFO [RS:1;jenkins-hbase4:35913] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:58,312 INFO [RS:1;jenkins-hbase4:35913] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:58,312 INFO [RS:1;jenkins-hbase4:35913] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:58,312 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 18:10:58,315 INFO [RS:1;jenkins-hbase4:35913] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35913 2023-07-24 18:10:58,320 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:58,320 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,321 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:58,321 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,320 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,320 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:58,320 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35913,1690222239741 2023-07-24 18:10:58,321 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,321 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,321 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35913,1690222239741] 2023-07-24 18:10:58,321 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35913,1690222239741; numProcessing=1 2023-07-24 18:10:58,324 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35913,1690222239741 already deleted, retry=false 2023-07-24 18:10:58,324 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35913,1690222239741 expired; onlineServers=3 2023-07-24 18:10:58,337 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,337 INFO [regionserver/jenkins-hbase4:0.leaseChecker] 
regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,337 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,337 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,340 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=39.92 KB at sequenceid=148 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:10:58,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=17 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/.tmp/info/2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:10:58,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.26 KB at sequenceid=73 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:10:58,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:10:58,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:10:58,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/.tmp/info/2628731f4d1b461e985c85e3adc2b46f as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:10:58,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:10:58,358 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/c5e564844d934f86b57f8f0aadc04422 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:10:58,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:10:58,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/2628731f4d1b461e985c85e3adc2b46f, entries=2, sequenceid=17, filesize=4.9 K 2023-07-24 18:10:58,364 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for b3e0fb36cbe9750f5f2b47d078547932 in 82ms, sequenceid=17, compaction requested=false 2023-07-24 18:10:58,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:10:58,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:10:58,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422, entries=21, sequenceid=73, filesize=5.7 K 2023-07-24 18:10:58,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~15.26 KB/15630, heapSize ~24.77 KB/25360, currentSize=0 B/0 for f93db382913b37f9661cac1fd8ee01a9 in 84ms, sequenceid=73, compaction requested=false 2023-07-24 18:10:58,374 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=148 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:10:58,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=10 2023-07-24 18:10:58,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/recovered.edits/76.seqid, newMaxSeqId=76, maxSeqId=12 2023-07-24 18:10:58,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:58,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:10:58,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:10:58,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:58,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:10:58,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:10:58,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
2023-07-24 18:10:58,382 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:10:58,399 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.73 KB at sequenceid=148 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/fde3e8b12951484eaef87586119cf207 2023-07-24 18:10:58,405 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fde3e8b12951484eaef87586119cf207 2023-07-24 18:10:58,406 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/f7f4dbb0133a4183b89b4fe6e9566541 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:10:58,412 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:10:58,412 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541, entries=53, sequenceid=148, filesize=10.7 K 2023-07-24 18:10:58,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:10:58,418 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:10:58,419 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1, entries=16, sequenceid=148, filesize=6.7 K 2023-07-24 18:10:58,419 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/fde3e8b12951484eaef87586119cf207 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207 2023-07-24 18:10:58,425 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fde3e8b12951484eaef87586119cf207 2023-07-24 18:10:58,425 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207, entries=23, sequenceid=148, filesize=7.0 K 2023-07-24 18:10:58,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~45.38 KB/46474, heapSize ~72.84 KB/74592, currentSize=0 B/0 for 1588230740 in 143ms, sequenceid=148, compaction requested=false 2023-07-24 18:10:58,436 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/recovered.edits/151.seqid, newMaxSeqId=151, maxSeqId=18 2023-07-24 18:10:58,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:58,437 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:58,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:58,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:58,445 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:58,445 INFO [RS:1;jenkins-hbase4:35913] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35913,1690222239741; zookeeper connection closed. 2023-07-24 18:10:58,446 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35913-0x101988716b40002, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:58,446 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@44b56d27] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@44b56d27 2023-07-24 18:10:58,482 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43449,1690222239527; all regions closed. 2023-07-24 18:10:58,482 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41915,1690222243305; all regions closed. 2023-07-24 18:10:58,483 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37467,1690222246245; all regions closed. 
2023-07-24 18:10:58,493 DEBUG [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:58,493 INFO [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41915%2C1690222243305.meta:.meta(num 1690222244481) 2023-07-24 18:10:58,493 DEBUG [RS:0;jenkins-hbase4:43449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:58,493 INFO [RS:0;jenkins-hbase4:43449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43449%2C1690222239527:(num 1690222241849) 2023-07-24 18:10:58,493 DEBUG [RS:0;jenkins-hbase4:43449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,493 INFO [RS:0;jenkins-hbase4:43449] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,493 INFO [RS:0;jenkins-hbase4:43449] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:58,493 INFO [RS:0;jenkins-hbase4:43449] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:58,493 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:58,493 DEBUG [RS:4;jenkins-hbase4:37467] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:58,494 INFO [RS:4;jenkins-hbase4:37467] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37467%2C1690222246245:(num 1690222246612) 2023-07-24 18:10:58,494 DEBUG [RS:4;jenkins-hbase4:37467] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,494 INFO [RS:4;jenkins-hbase4:37467] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,493 INFO [RS:0;jenkins-hbase4:43449] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:58,494 INFO [RS:0;jenkins-hbase4:43449] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:58,495 INFO [RS:0;jenkins-hbase4:43449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43449 2023-07-24 18:10:58,495 INFO [RS:4;jenkins-hbase4:37467] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:58,495 INFO [RS:4;jenkins-hbase4:37467] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:58,495 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:58,495 INFO [RS:4;jenkins-hbase4:37467] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:58,495 INFO [RS:4;jenkins-hbase4:37467] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:10:58,499 INFO [RS:4;jenkins-hbase4:37467] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37467 2023-07-24 18:10:58,501 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:58,501 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,501 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:58,501 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:58,501 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:58,502 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37467,1690222246245 2023-07-24 18:10:58,502 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43449,1690222239527 2023-07-24 18:10:58,503 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37467,1690222246245] 2023-07-24 18:10:58,503 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37467,1690222246245; numProcessing=2 2023-07-24 18:10:58,504 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37467,1690222246245 already deleted, retry=false 2023-07-24 18:10:58,504 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37467,1690222246245 expired; onlineServers=2 2023-07-24 18:10:58,504 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43449,1690222239527] 2023-07-24 18:10:58,504 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43449,1690222239527; numProcessing=3 2023-07-24 18:10:58,507 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43449,1690222239527 already deleted, retry=false 2023-07-24 18:10:58,507 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,43449,1690222239527 expired; onlineServers=1 2023-07-24 18:10:58,508 DEBUG [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:10:58,508 INFO [RS:3;jenkins-hbase4:41915] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41915%2C1690222243305:(num 1690222243724) 2023-07-24 18:10:58,509 DEBUG [RS:3;jenkins-hbase4:41915] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,509 INFO [RS:3;jenkins-hbase4:41915] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:58,509 INFO [RS:3;jenkins-hbase4:41915] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:58,509 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:58,510 INFO [RS:3;jenkins-hbase4:41915] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41915 2023-07-24 18:10:58,512 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41915,1690222243305 2023-07-24 18:10:58,512 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:58,514 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41915,1690222243305] 2023-07-24 18:10:58,514 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41915,1690222243305; numProcessing=4 2023-07-24 18:10:58,515 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41915,1690222243305 already deleted, retry=false 2023-07-24 18:10:58,515 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41915,1690222243305 expired; onlineServers=0 2023-07-24 18:10:58,515 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34677,1690222237492' ***** 2023-07-24 18:10:58,515 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 18:10:58,516 DEBUG [M:0;jenkins-hbase4:34677] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31caed9d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:58,516 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:58,519 INFO [M:0;jenkins-hbase4:34677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@633d5d38{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:10:58,520 
INFO [M:0;jenkins-hbase4:34677] server.AbstractConnector(383): Stopped ServerConnector@6a462da6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,520 INFO [M:0;jenkins-hbase4:34677] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:58,521 INFO [M:0;jenkins-hbase4:34677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@43edd3da{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:58,522 INFO [M:0;jenkins-hbase4:34677] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@826539b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:58,523 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34677,1690222237492 2023-07-24 18:10:58,523 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34677,1690222237492; all regions closed. 2023-07-24 18:10:58,523 DEBUG [M:0;jenkins-hbase4:34677] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:58,523 INFO [M:0;jenkins-hbase4:34677] master.HMaster(1491): Stopping master jetty server 2023-07-24 18:10:58,524 INFO [M:0;jenkins-hbase4:34677] server.AbstractConnector(383): Stopped ServerConnector@64a59ec9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:58,524 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:58,524 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:58,525 DEBUG [M:0;jenkins-hbase4:34677] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 18:10:58,525 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 18:10:58,525 DEBUG [M:0;jenkins-hbase4:34677] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 18:10:58,525 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:58,525 INFO [M:0;jenkins-hbase4:34677] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 18:10:58,525 INFO [M:0;jenkins-hbase4:34677] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 18:10:58,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222241383] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222241383,5,FailOnTimeoutGroup] 2023-07-24 18:10:58,525 INFO [M:0;jenkins-hbase4:34677] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 18:10:58,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222241383] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222241383,5,FailOnTimeoutGroup] 2023-07-24 18:10:58,525 DEBUG [M:0;jenkins-hbase4:34677] master.HMaster(1512): Stopping service threads 2023-07-24 18:10:58,526 INFO [M:0;jenkins-hbase4:34677] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 18:10:58,526 ERROR [M:0;jenkins-hbase4:34677] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 18:10:58,527 INFO [M:0;jenkins-hbase4:34677] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 18:10:58,527 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 18:10:58,527 DEBUG [M:0;jenkins-hbase4:34677] zookeeper.ZKUtil(398): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 18:10:58,527 WARN [M:0;jenkins-hbase4:34677] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 18:10:58,527 INFO [M:0;jenkins-hbase4:34677] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 18:10:58,528 INFO [M:0;jenkins-hbase4:34677] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 18:10:58,528 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:10:58,528 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:58,528 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:58,528 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:10:58,528 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 18:10:58,528 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=382.74 KB heapSize=456.54 KB 2023-07-24 18:10:58,552 INFO [M:0;jenkins-hbase4:34677] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=382.74 KB at sequenceid=844 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e38e113787d94d1a96a83aa49006b270 2023-07-24 18:10:58,561 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/e38e113787d94d1a96a83aa49006b270 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e38e113787d94d1a96a83aa49006b270 2023-07-24 18:10:58,568 INFO [M:0;jenkins-hbase4:34677] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e38e113787d94d1a96a83aa49006b270, entries=114, sequenceid=844, filesize=26.1 K 2023-07-24 18:10:58,569 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegion(2948): Finished flush of dataSize ~382.74 KB/391921, heapSize ~456.52 KB/467480, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 40ms, sequenceid=844, compaction requested=false 2023-07-24 18:10:58,571 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:58,571 DEBUG [M:0;jenkins-hbase4:34677] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:58,576 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:58,576 INFO [M:0;jenkins-hbase4:34677] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 18:10:58,577 INFO [M:0;jenkins-hbase4:34677] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34677 2023-07-24 18:10:58,582 DEBUG [M:0;jenkins-hbase4:34677] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34677,1690222237492 already deleted, retry=false 2023-07-24 18:10:59,052 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,052 INFO [M:0;jenkins-hbase4:34677] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34677,1690222237492; zookeeper connection closed. 2023-07-24 18:10:59,052 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:34677-0x101988716b40000, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,152 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,152 INFO [RS:3;jenkins-hbase4:41915] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41915,1690222243305; zookeeper connection closed. 
2023-07-24 18:10:59,152 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41915-0x101988716b4000b, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,152 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4e22c52a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4e22c52a 2023-07-24 18:10:59,252 INFO [RS:0;jenkins-hbase4:43449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43449,1690222239527; zookeeper connection closed. 2023-07-24 18:10:59,252 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,253 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:43449-0x101988716b40001, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,253 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@76563c42] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@76563c42 2023-07-24 18:10:59,353 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,353 INFO [RS:4;jenkins-hbase4:37467] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37467,1690222246245; zookeeper connection closed. 2023-07-24 18:10:59,353 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37467-0x101988716b4000d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:59,353 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5bddb371] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5bddb371 2023-07-24 18:10:59,353 INFO [Listener at localhost/44627] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-24 18:10:59,353 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-24 18:11:01,354 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 18:11:01,354 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 18:11:01,354 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 18:11:01,354 DEBUG [Listener at localhost/44627] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-24 18:11:01,355 INFO [Listener at localhost/44627] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:01,355 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,356 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,356 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:01,356 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,356 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:01,356 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:01,357 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36473 2023-07-24 18:11:01,358 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,359 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,360 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36473 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:01,363 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:364730x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:01,363 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36473-0x101988716b40010 connected 2023-07-24 18:11:01,366 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:01,366 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:01,367 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:01,367 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36473 2023-07-24 18:11:01,367 DEBUG [Listener at localhost/44627] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36473 2023-07-24 18:11:01,367 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36473 2023-07-24 18:11:01,368 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36473 2023-07-24 18:11:01,368 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36473 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:01,370 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:11:01,371 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 35485 2023-07-24 18:11:01,371 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:01,372 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,373 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78fbadc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:01,373 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,373 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a3cf501{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:01,488 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:01,489 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:01,489 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:01,490 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:11:01,490 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,491 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3bdce63f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-35485-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1725955665791371567/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:01,493 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@727a60af{HTTP/1.1, (http/1.1)}{0.0.0.0:35485} 2023-07-24 18:11:01,493 INFO [Listener at localhost/44627] server.Server(415): Started @29713ms 2023-07-24 18:11:01,493 INFO [Listener at localhost/44627] master.HMaster(444): hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9, hbase.cluster.distributed=false 2023-07-24 18:11:01,495 DEBUG [pool-351-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-24 18:11:01,513 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:01,513 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,513 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,513 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:01,514 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,514 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:01,514 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:01,515 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37389 2023-07-24 18:11:01,516 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:01,517 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:01,518 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,519 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,520 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37389 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:01,525 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:373890x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:01,526 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:373890x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:01,526 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37389-0x101988716b40011 connected 2023-07-24 18:11:01,527 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:01,527 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:01,528 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37389 2023-07-24 18:11:01,530 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37389 2023-07-24 18:11:01,534 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=37389 2023-07-24 18:11:01,536 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37389 2023-07-24 18:11:01,537 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37389 2023-07-24 18:11:01,539 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:01,539 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:01,539 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:01,540 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:01,540 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:01,540 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:01,540 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:01,541 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 34001 2023-07-24 18:11:01,541 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:01,547 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,547 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42291629{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:01,547 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,547 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@393a8f0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:01,668 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:01,669 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:01,669 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:01,669 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 
18:11:01,670 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,670 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@d8ea3d8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-34001-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2358792935315841413/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:01,672 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@77124f61{HTTP/1.1, (http/1.1)}{0.0.0.0:34001} 2023-07-24 18:11:01,672 INFO [Listener at localhost/44627] server.Server(415): Started @29892ms 2023-07-24 18:11:01,683 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:01,683 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,683 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,684 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:01,684 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,684 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:01,684 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:01,684 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35775 2023-07-24 18:11:01,685 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:01,686 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:01,687 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,688 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,688 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35775 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:01,692 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): 
regionserver:357750x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:01,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35775-0x101988716b40012 connected 2023-07-24 18:11:01,695 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:01,696 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:01,696 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:01,697 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35775 2023-07-24 18:11:01,697 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35775 2023-07-24 18:11:01,697 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35775 2023-07-24 18:11:01,697 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35775 2023-07-24 18:11:01,698 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35775 2023-07-24 18:11:01,700 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:01,700 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:01,700 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:01,700 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:01,700 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:01,701 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:01,701 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:11:01,701 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 44333 2023-07-24 18:11:01,701 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:01,703 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,703 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b179b51{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:01,704 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,704 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@75f849ee{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:01,822 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:01,824 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:01,824 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:01,824 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:01,825 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,826 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@398c60cb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-44333-hbase-server-2_4_18-SNAPSHOT_jar-_-any-486487666266019018/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:01,828 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@8ee8361{HTTP/1.1, (http/1.1)}{0.0.0.0:44333} 2023-07-24 18:11:01,828 INFO [Listener at localhost/44627] server.Server(415): Started @30048ms 2023-07-24 18:11:01,840 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:01,840 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,840 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,840 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:01,840 INFO 
[Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:01,841 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:01,841 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:01,842 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35553 2023-07-24 18:11:01,842 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:01,843 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:01,844 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,845 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:01,846 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35553 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:01,852 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:355530x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:01,853 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:355530x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:01,855 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35553-0x101988716b40013 connected 2023-07-24 18:11:01,856 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:01,856 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:01,857 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35553 2023-07-24 18:11:01,857 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35553 2023-07-24 18:11:01,857 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35553 2023-07-24 18:11:01,858 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35553 2023-07-24 18:11:01,858 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=35553 2023-07-24 18:11:01,860 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:01,860 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:01,861 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:01,861 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:01,861 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:01,861 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:01,861 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:01,862 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 40215 2023-07-24 18:11:01,862 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:01,877 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,877 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@101b7825{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:01,877 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:01,878 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c9dc244{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:02,005 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:02,006 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:02,006 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:02,006 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:02,008 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,008 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7221d2d9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-40215-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8692508148199996775/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:02,010 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@6da15dcc{HTTP/1.1, (http/1.1)}{0.0.0.0:40215} 2023-07-24 18:11:02,010 INFO [Listener at localhost/44627] server.Server(415): Started @30230ms 2023-07-24 18:11:02,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:02,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@211dfb1c{HTTP/1.1, (http/1.1)}{0.0.0.0:41505} 2023-07-24 18:11:02,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @30240ms 2023-07-24 18:11:02,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,022 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:02,023 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,024 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:02,024 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:02,024 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:02,024 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:02,025 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:02,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:02,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36473,1690222261355 from backup master directory 2023-07-24 18:11:02,028 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:02,029 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,029 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:02,029 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:02,029 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x40869261 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:02,083 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2de1ccd8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:02,083 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:02,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 18:11:02,084 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:02,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492-dead as it is dead 2023-07-24 18:11:02,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492-dead/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:11:02,099 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492-dead/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 after 5ms 2023-07-24 18:11:02,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492-dead/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:11:02,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,34677,1690222237492-dead 2023-07-24 18:11:02,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36473%2C1690222261355, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/oldWALs, maxLogs=10 2023-07-24 18:11:02,125 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:02,132 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:02,133 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:02,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:02,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:11:02,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:02,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:02,140 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:02,140 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:02,143 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:02,144 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:11:02,144 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:11:02,152 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e38e113787d94d1a96a83aa49006b270 2023-07-24 18:11:02,152 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:02,153 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-24 18:11:02,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(5276): Replaying edits from hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:11:02,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 995, firstSequenceIdInLog=3, maxSequenceIdInLog=846, path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:11:02,194 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C34677%2C1690222237492.1690222240574 2023-07-24 18:11:02,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:02,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/846.seqid, newMaxSeqId=846, maxSeqId=1 2023-07-24 18:11:02,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=847; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129539040, jitterRate=0.03651909530162811}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:02,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:02,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:11:02,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:11:02,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:11:02,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-24 18:11:02,205 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 18:11:02,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-24 18:11:02,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-24 18:11:02,215 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:11:02,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-24 18:11:02,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:11:02,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE 2023-07-24 18:11:02,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:11:02,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false 2023-07-24 18:11:02,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-24 18:11:02,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:02,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:02,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:02,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; 
CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:02,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:02,220 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 18:11:02,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 18:11:02,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 18:11:02,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:02,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:02,221 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:11:02,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:11:02,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 18:11:02,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(411): Completed pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:02,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:02,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:02,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:02,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:11:02,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 17 msec 2023-07-24 18:11:02,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:11:02,225 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-24 18:11:02,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,41915,1690222243305, table=hbase:meta, region=1588230740 2023-07-24 18:11:02,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-24 18:11:02,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37467,1690222246245 already deleted, retry=false 2023-07-24 18:11:02,231 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,37467,1690222246245 on jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=115, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,37467,1690222246245, splitWal=true, meta=false 2023-07-24 18:11:02,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=115 for jenkins-hbase4.apache.org,37467,1690222246245 (carryingMeta=false) jenkins-hbase4.apache.org,37467,1690222246245/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@49dc2542[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-24 18:11:02,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35913,1690222239741 already deleted, retry=false 2023-07-24 18:11:02,233 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,35913,1690222239741 on jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,35913,1690222239741, splitWal=true, meta=false 2023-07-24 18:11:02,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=116 for jenkins-hbase4.apache.org,35913,1690222239741 (carryingMeta=false) jenkins-hbase4.apache.org,35913,1690222239741/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@519d43d7[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:11:02,235 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43449,1690222239527 already deleted, retry=false 2023-07-24 18:11:02,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43449,1690222239527 on jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43449,1690222239527, splitWal=true, meta=false 2023-07-24 18:11:02,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=117 for jenkins-hbase4.apache.org,43449,1690222239527 (carryingMeta=false) jenkins-hbase4.apache.org,43449,1690222239527/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4c1ec304[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:11:02,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41915,1690222243305 already deleted, retry=false 2023-07-24 18:11:02,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,41915,1690222243305 on jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=118, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true 2023-07-24 18:11:02,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=118 for jenkins-hbase4.apache.org,41915,1690222243305 (carryingMeta=true) jenkins-hbase4.apache.org,41915,1690222243305/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@74b83b40[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-24 18:11:02,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-24 18:11:02,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:11:02,240 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:11:02,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:11:02,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:11:02,242 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:11:02,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:02,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:02,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:02,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:02,244 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:02,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36473,1690222261355, sessionid=0x101988716b40010, setting cluster-up flag (Was=false) 2023-07-24 18:11:02,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:11:02,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:11:02,260 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:02,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:11:02,265 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:11:02,266 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-24 18:11:02,267 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:02,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:11:02,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 18:11:02,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 18:11:02,270 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:02,270 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41915 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41915 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:02,272 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41915 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41915 2023-07-24 18:11:02,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:02,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:11:02,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:02,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
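The two balancer entries above show the StochasticLoadBalancer loading its defaults (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000). A minimal sketch of setting those same knobs through Configuration; the key names are the ones the 2.x balancer reads and the values simply mirror the defaults echoed in the log, so treat both as assumptions if porting elsewhere:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // These match the values echoed by StochasticLoadBalancer(253) above.
    conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    System.out.println("maxSteps=" +
        conf.getInt("hbase.master.balancer.stochastic.maxSteps", -1));
  }
}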
2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:02,284 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690222292293 2023-07-24 18:11:02,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:11:02,294 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:11:02,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:11:02,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:11:02,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:11:02,295 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:11:02,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
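The MASTER_* executor entries above are per-event-type thread pools sized by corePoolSize/maxPoolSize. The pool class itself is internal to HBase, so the sketch below only illustrates the same core/max sizing pattern with plain java.util.concurrent; the pool name and sizes mirror the MASTER_OPEN_REGION entry and are otherwise illustrative:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EventTypePool {
  public static void main(String[] args) {
    // corePoolSize=5, maxPoolSize=5, as logged for MASTER_OPEN_REGION above.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
        r -> new Thread(r, "MASTER_OPEN_REGION-demo"));
    // Let idle workers exit, like the allowCoreThreadTimeOut=true dispatcher logged earlier.
    pool.allowCoreThreadTimeOut(true);
    pool.submit(() -> System.out.println("handled one open-region event"));
    pool.shutdown();
  }
}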
2023-07-24 18:11:02,300 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41915,1690222243305; numProcessing=1 2023-07-24 18:11:02,300 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=118, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true 2023-07-24 18:11:02,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:11:02,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:11:02,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:11:02,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:11:02,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:11:02,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222262303,5,FailOnTimeoutGroup] 2023-07-24 18:11:02,306 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222262304,5,FailOnTimeoutGroup] 2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222262306, completionTime=-1 2023-07-24 18:11:02,306 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 
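The warning above fires because 'hbase.master.wait.on.regionservers.maxtostart' is left at -1 while the min is 1, so the master ignores the max. A minimal sketch of setting the keys named in that warning to consistent values, plus the 4500 ms bound ServerManager logs while waiting; the concrete numbers are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionServerStartupWait {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Wait for at least 1 and at most 3 region servers before the master proceeds.
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 3);
    // Upper bound on the wait, matching the timeout=4500ms ServerManager reports.
    conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
    System.out.println("min=" +
        conf.getInt("hbase.master.wait.on.regionservers.mintostart", -1));
  }
}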
2023-07-24 18:11:02,306 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 18:11:02,307 DEBUG [PEWorker-5] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35913,1690222239741; numProcessing=2 2023-07-24 18:11:02,307 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=118, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true, isMeta: true 2023-07-24 18:11:02,307 INFO [PEWorker-5] procedure.ServerCrashProcedure(161): Start pid=116, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35913,1690222239741, splitWal=true, meta=false 2023-07-24 18:11:02,307 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37467,1690222246245; numProcessing=3 2023-07-24 18:11:02,308 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=115, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37467,1690222246245, splitWal=true, meta=false 2023-07-24 18:11:02,310 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305-splitting 2023-07-24 18:11:02,310 DEBUG [PEWorker-4] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43449,1690222239527; numProcessing=4 2023-07-24 18:11:02,310 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=117, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43449,1690222239527, splitWal=true, meta=false 2023-07-24 18:11:02,311 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305-splitting dir is empty, no logs to split. 2023-07-24 18:11:02,311 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,41915,1690222243305 WAL count=0, meta=true 2023-07-24 18:11:02,312 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:02,312 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:02,313 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305-splitting dir is empty, no logs to split. 2023-07-24 18:11:02,313 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,41915,1690222243305 WAL count=0, meta=true 2023-07-24 18:11:02,313 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,41915,1690222243305 WAL splitting is done? 
wals=0, meta=true 2023-07-24 18:11:02,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:11:02,316 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:02,316 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=119, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:11:02,316 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:02,319 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=119, ppid=118, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:02,319 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:02,321 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:02,321 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:02,321 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:02,323 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:02,323 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:02,324 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:02,324 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:02,324 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:02,325 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ReadOnlyZKClient(139): Connect 0x1686a54d to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:02,325 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:02,327 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ReadOnlyZKClient(139): Connect 0x66c92191 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:02,327 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:02,329 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ReadOnlyZKClient(139): Connect 0x1aed33ba to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:02,337 DEBUG [RS:0;jenkins-hbase4:37389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@184159ed, compressor=null, tcpKeepAlive=true, 
tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:02,337 DEBUG [RS:0;jenkins-hbase4:37389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c7ece3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:02,337 DEBUG [RS:1;jenkins-hbase4:35775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@680ed338, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:02,338 DEBUG [RS:1;jenkins-hbase4:35775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6c314ae1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:02,338 DEBUG [RS:2;jenkins-hbase4:35553] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@142ace97, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:02,338 DEBUG [RS:2;jenkins-hbase4:35553] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@101558ca, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:02,347 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35553 2023-07-24 18:11:02,347 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35775 2023-07-24 18:11:02,347 INFO [RS:2;jenkins-hbase4:35553] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:02,347 INFO [RS:2;jenkins-hbase4:35553] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:02,347 INFO [RS:1;jenkins-hbase4:35775] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:02,347 INFO [RS:1;jenkins-hbase4:35775] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:02,347 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:02,347 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 18:11:02,348 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36473,1690222261355 with isa=jenkins-hbase4.apache.org/172.31.14.131:35775, startcode=1690222261683 2023-07-24 18:11:02,348 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36473,1690222261355 with isa=jenkins-hbase4.apache.org/172.31.14.131:35553, startcode=1690222261840 2023-07-24 18:11:02,348 DEBUG [RS:2;jenkins-hbase4:35553] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:02,348 DEBUG [RS:1;jenkins-hbase4:35775] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:02,350 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37389 2023-07-24 18:11:02,350 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38479, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:02,350 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49781, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:02,350 INFO [RS:0;jenkins-hbase4:37389] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:02,351 INFO [RS:0;jenkins-hbase4:37389] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:02,351 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:02,351 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36473] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,351 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:02,352 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:11:02,352 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36473] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,352 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
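The ServerEventsListenerThread entries refreshing the 'default' rsgroup as each region server registers can also be checked from a client. A minimal sketch, assuming the hbase-rsgroup module's RSGroupAdminClient is on the classpath (as it is for this test) and using the mini-cluster quorum from this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroupServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1:59012"); // quorum from this run
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // "default" holds every server not explicitly moved to another group.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      for (Address server : defaultGroup.getServers()) {
        System.out.println("default group member: " + server);
      }
    }
  }
}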
2023-07-24 18:11:02,352 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36473,1690222261355 with isa=jenkins-hbase4.apache.org/172.31.14.131:37389, startcode=1690222261512 2023-07-24 18:11:02,352 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:02,352 DEBUG [RS:0;jenkins-hbase4:37389] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:02,352 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:02,352 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:02,352 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35485 2023-07-24 18:11:02,352 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:11:02,352 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:02,353 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35485 2023-07-24 18:11:02,353 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37435, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:02,354 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36473] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,354 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:02,354 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:11:02,354 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:02,354 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:02,354 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35485 2023-07-24 18:11:02,356 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=50ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 18:11:02,357 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:02,358 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ZKUtil(162): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,358 WARN [RS:1;jenkins-hbase4:35775] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:02,358 INFO [RS:1;jenkins-hbase4:35775] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:02,358 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37389,1690222261512] 2023-07-24 18:11:02,358 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ZKUtil(162): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,358 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,358 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35775,1690222261683] 2023-07-24 18:11:02,359 WARN [RS:0;jenkins-hbase4:37389] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
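The RegionServerTracker and ZKUtil entries here are ordinary ZooKeeper child watches on /hbase/rs, where every live region server holds an ephemeral znode. A minimal sketch with the plain ZooKeeper client against the quorum and base znode shown in the log; watch handling is reduced to a print:

import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchRegionServerZNodes {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("event: " + event.getType() + " on " + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:59012", 90_000, watcher);
    // Ephemeral children of /hbase/rs, one per live region server;
    // passing true re-registers the watch so NodeChildrenChanged events fire.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    servers.forEach(s -> System.out.println("rs znode: " + s));
    Thread.sleep(5_000); // keep the session alive long enough to observe an event
    zk.close();
  }
}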
2023-07-24 18:11:02,359 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35553,1690222261840] 2023-07-24 18:11:02,359 INFO [RS:0;jenkins-hbase4:37389] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:02,359 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ZKUtil(162): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,359 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,359 WARN [RS:2;jenkins-hbase4:35553] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:02,359 INFO [RS:2;jenkins-hbase4:35553] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:02,359 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,365 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ZKUtil(162): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,365 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ZKUtil(162): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,365 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ZKUtil(162): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,365 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ZKUtil(162): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,365 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ZKUtil(162): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,365 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ZKUtil(162): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,366 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ZKUtil(162): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,366 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ZKUtil(162): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,366 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ZKUtil(162): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set 
watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,367 DEBUG [RS:1;jenkins-hbase4:35775] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:02,367 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:02,367 INFO [RS:1;jenkins-hbase4:35775] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:02,367 INFO [RS:2;jenkins-hbase4:35553] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:02,367 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:02,368 INFO [RS:0;jenkins-hbase4:37389] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:02,368 INFO [RS:1;jenkins-hbase4:35775] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:02,369 INFO [RS:1;jenkins-hbase4:35775] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:02,369 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,372 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:02,372 INFO [RS:0;jenkins-hbase4:37389] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:02,373 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41915 this server is in the failed servers list 2023-07-24 18:11:02,375 INFO [RS:2;jenkins-hbase4:35553] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:02,377 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,377 INFO [RS:0;jenkins-hbase4:37389] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:02,377 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
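The MemStoreFlusher and PressureAwareCompactionThroughputController entries above are driven by heap-fraction and throughput settings. A minimal sketch of the configuration keys behind them; the key names are the ones used by 2.x (an assumption worth verifying when porting), and the fractions and bounds mirror the defaults that produce the ~782 MB / ~743 MB limits and 100 MB/s / 50 MB/s bounds logged here:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FlushAndCompactionLimits {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Global memstore cap as a fraction of the region server heap (default 0.4),
    // with the low-water mark at 95% of that cap.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Compaction throughput bounds matching the 100 MB/s / 50 MB/s logged above.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    System.out.println("memstore fraction = " +
        conf.getFloat("hbase.regionserver.global.memstore.size", -1f));
  }
}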
2023-07-24 18:11:02,377 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,377 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:02,377 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,377 INFO [RS:2;jenkins-hbase4:35553] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,378 DEBUG [RS:1;jenkins-hbase4:35775] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,379 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:02,379 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,379 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,379 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,380 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:02,380 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,380 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,380 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,380 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,380 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,380 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:02,381 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,381 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,382 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,382 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:02,382 DEBUG [RS:0;jenkins-hbase4:37389] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 
18:11:02,382 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,382 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,382 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,382 DEBUG [RS:2;jenkins-hbase4:35553] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:02,387 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,387 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,387 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,387 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,390 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,390 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,390 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,390 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,393 INFO [RS:1;jenkins-hbase4:35775] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:02,393 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35775,1690222261683-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
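Most of the "Chore ScheduledChore name=... is enabled" entries follow HBase's ChoreService/ScheduledChore pattern: a named task run at a fixed period until its Stoppable is stopped. A minimal sketch of a custom chore, assuming the ScheduledChore(name, stopper, period) constructor is accessible as it is for HBase-internal chores; the chore name and period are illustrative:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class DemoChore {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // chore() runs once per second until the stopper is stopped.
    ScheduledChore chore = new ScheduledChore("DemoChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("DemoChore tick");
      }
    };
    ChoreService service = new ChoreService("demo");
    service.scheduleChore(chore);
    Thread.sleep(3000);
    stopper.stop("done");
    service.shutdown();
  }
}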
2023-07-24 18:11:02,404 INFO [RS:1;jenkins-hbase4:35775] regionserver.Replication(203): jenkins-hbase4.apache.org,35775,1690222261683 started 2023-07-24 18:11:02,404 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35775,1690222261683, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35775, sessionid=0x101988716b40012 2023-07-24 18:11:02,404 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:02,404 DEBUG [RS:1;jenkins-hbase4:35775] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,405 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35775,1690222261683' 2023-07-24 18:11:02,405 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:02,405 INFO [RS:2;jenkins-hbase4:35553] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:02,405 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35553,1690222261840-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,405 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:02,405 INFO [RS:0;jenkins-hbase4:37389] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:02,405 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37389,1690222261512-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35775,1690222261683' 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:02,406 DEBUG [RS:1;jenkins-hbase4:35775] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:02,407 INFO [RS:1;jenkins-hbase4:35775] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:11:02,409 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:02,409 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ZKUtil(398): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:11:02,409 INFO [RS:1;jenkins-hbase4:35775] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:11:02,410 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,410 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,418 INFO [RS:0;jenkins-hbase4:37389] regionserver.Replication(203): jenkins-hbase4.apache.org,37389,1690222261512 started 2023-07-24 18:11:02,418 INFO [RS:2;jenkins-hbase4:35553] regionserver.Replication(203): jenkins-hbase4.apache.org,35553,1690222261840 started 2023-07-24 18:11:02,418 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37389,1690222261512, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37389, sessionid=0x101988716b40011 2023-07-24 18:11:02,418 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35553,1690222261840, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35553, sessionid=0x101988716b40013 2023-07-24 18:11:02,419 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:02,419 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:02,419 DEBUG [RS:2;jenkins-hbase4:35553] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,419 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35553,1690222261840' 2023-07-24 18:11:02,419 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:02,419 DEBUG [RS:0;jenkins-hbase4:37389] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,419 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37389,1690222261512' 2023-07-24 18:11:02,419 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:02,419 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:02,419 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:02,420 DEBUG 
[RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:02,420 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:02,420 DEBUG [RS:0;jenkins-hbase4:37389] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,420 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37389,1690222261512' 2023-07-24 18:11:02,420 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35553,1690222261840' 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:02,420 DEBUG [RS:0;jenkins-hbase4:37389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:02,420 DEBUG [RS:2;jenkins-hbase4:35553] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:02,421 DEBUG [RS:0;jenkins-hbase4:37389] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:02,421 DEBUG [RS:2;jenkins-hbase4:35553] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:02,421 INFO [RS:0;jenkins-hbase4:37389] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:11:02,421 INFO [RS:2;jenkins-hbase4:35553] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:11:02,421 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,421 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,421 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ZKUtil(398): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:11:02,421 INFO [RS:0;jenkins-hbase4:37389] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:11:02,421 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ZKUtil(398): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:11:02,421 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 
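The RegionServerRpcQuotaManager entries above report that rpc throttling is enabled but that no /hbase/rpc-throttle znode exists yet, because no quotas have been set in this run. For reference, a minimal sketch of installing one request-number throttle through the client quota API; the user name and limit are illustrative, and the cluster must have hbase.quota.enabled=true for the setting to take effect:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class InstallRpcThrottle {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1:59012"); // quorum from this run
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Limit user "jenkins" to 100 requests per second cluster-wide.
      admin.setQuota(QuotaSettingsFactory.throttleUser(
          "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
    }
  }
}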
2023-07-24 18:11:02,421 INFO [RS:2;jenkins-hbase4:35553] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:11:02,421 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,421 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,422 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:02,469 DEBUG [jenkins-hbase4:36473] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:02,470 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:02,470 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:02,470 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:02,470 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:02,470 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:02,473 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37389,1690222261512, state=OPENING 2023-07-24 18:11:02,475 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:11:02,475 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=120, ppid=119, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37389,1690222261512}] 2023-07-24 18:11:02,475 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:02,514 INFO [RS:1;jenkins-hbase4:35775] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35775%2C1690222261683, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:02,524 INFO [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37389%2C1690222261512, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:02,524 INFO [RS:2;jenkins-hbase4:35553] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35553%2C1690222261840, suffix=, 
logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:02,537 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:02,537 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:02,537 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:02,544 INFO [RS:1;jenkins-hbase4:35775] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683/jenkins-hbase4.apache.org%2C35775%2C1690222261683.1690222262514 2023-07-24 18:11:02,546 DEBUG [RS:1;jenkins-hbase4:35775] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:11:02,548 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:02,548 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:02,548 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:02,561 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:02,561 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:02,561 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:02,562 INFO [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512/jenkins-hbase4.apache.org%2C37389%2C1690222261512.1690222262525 2023-07-24 18:11:02,563 DEBUG [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:11:02,567 INFO [RS:2;jenkins-hbase4:35553] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840/jenkins-hbase4.apache.org%2C35553%2C1690222261840.1690222262525 2023-07-24 18:11:02,567 DEBUG [RS:2;jenkins-hbase4:35553] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:11:02,575 WARN [ReadOnlyZKClient-127.0.0.1:59012@0x40869261] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:11:02,575 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:02,576 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:02,577 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37389] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:52494 deadline: 1690222322577, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,632 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:02,633 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:02,634 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:02,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:11:02,640 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:02,642 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37389%2C1690222261512.meta, suffix=.meta, 
logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:02,658 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:02,658 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:02,658 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:02,664 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512/jenkins-hbase4.apache.org%2C37389%2C1690222261512.meta.1690222262642.meta 2023-07-24 18:11:02,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:11:02,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:02,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:02,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:11:02,665 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 18:11:02,665 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:11:02,666 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:02,666 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:11:02,666 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:11:02,670 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:11:02,671 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:11:02,671 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:11:02,672 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:11:02,681 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:11:02,689 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:11:02,689 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:11:02,689 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:02,689 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:11:02,690 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:02,690 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:02,691 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:11:02,697 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:11:02,697 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:11:02,698 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:02,698 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:11:02,699 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:11:02,699 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:11:02,699 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:11:02,706 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:11:02,711 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fde3e8b12951484eaef87586119cf207 2023-07-24 18:11:02,711 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207 2023-07-24 18:11:02,711 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:02,712 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:11:02,713 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:11:02,716 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:11:02,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:11:02,718 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=152; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11833612320, jitterRate=0.10209102928638458}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:11:02,718 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:11:02,723 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=120, masterSystemTime=1690222262632 2023-07-24 18:11:02,728 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:11:02,728 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:11:02,729 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37389,1690222261512, state=OPEN 2023-07-24 18:11:02,730 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:11:02,730 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:02,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=120, resume processing ppid=119 2023-07-24 
18:11:02,735 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, ppid=119, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37389,1690222261512 in 255 msec 2023-07-24 18:11:02,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-24 18:11:02,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 421 msec 2023-07-24 18:11:02,899 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:02,900 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43449 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:02,902 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43449 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 2023-07-24 18:11:03,009 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43449 this server is in the failed servers list 2023-07-24 18:11:03,123 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:11:03,217 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43449 this server is in the failed servers list 2023-07-24 18:11:03,521 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43449 this server is in the failed servers list 2023-07-24 18:11:03,859 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1553ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-24 18:11:04,028 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43449 this server is in the failed servers list 2023-07-24 18:11:05,036 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43449 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:05,038 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43449 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 2023-07-24 18:11:05,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3055ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-24 18:11:06,814 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4508ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-24 18:11:06,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 18:11:06,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,37467,1690222246245, regionLocation=jenkins-hbase4.apache.org,37467,1690222246245, openSeqNum=11 2023-07-24 18:11:06,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,43449,1690222239527, regionLocation=jenkins-hbase4.apache.org,43449,1690222239527, openSeqNum=13 2023-07-24 18:11:06,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:11:06,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222326819 2023-07-24 18:11:06,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222386819 2023-07-24 18:11:06,819 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-24 18:11:06,845 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,41915,1690222243305 had 1 regions 2023-07-24 18:11:06,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36473,1690222261355-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:06,846 INFO [PEWorker-1] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,35913,1690222239741 had 0 regions 2023-07-24 18:11:06,846 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,37467,1690222246245 had 1 regions 2023-07-24 18:11:06,846 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43449,1690222239527 had 1 regions 2023-07-24 18:11:06,846 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36473,1690222261355-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:06,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36473,1690222261355-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:06,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36473, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:06,847 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:06,848 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=118, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true, isMeta: false 2023-07-24 18:11:06,848 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=115, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37467,1690222246245, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:06,848 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. is NOT online; state={b3e0fb36cbe9750f5f2b47d078547932 state=OPEN, ts=1690222266819, server=jenkins-hbase4.apache.org,37467,1690222246245}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-24 18:11:06,848 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=117, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43449,1690222239527, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:06,848 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=116, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35913,1690222239741, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:06,851 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,851 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245-splitting 2023-07-24 18:11:06,851 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,41915,1690222243305 WAL count=0, meta=false 2023-07-24 18:11:06,852 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527-splitting 2023-07-24 18:11:06,853 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741-splitting 2023-07-24 18:11:06,853 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,853 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,43449,1690222239527 WAL count=0, meta=false 2023-07-24 18:11:06,854 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,37467,1690222246245/hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., unknown_server=jenkins-hbase4.apache.org,43449,1690222239527/hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
2023-07-24 18:11:06,854 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,854 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,35913,1690222239741 WAL count=0, meta=false 2023-07-24 18:11:06,854 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,854 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,37467,1690222246245 WAL count=0, meta=false 2023-07-24 18:11:06,855 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41915,1690222243305-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,855 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,41915,1690222243305 WAL count=0, meta=false 2023-07-24 18:11:06,855 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,41915,1690222243305 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:06,856 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,856 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,43449,1690222239527 WAL count=0, meta=false 2023-07-24 18:11:06,856 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43449,1690222239527 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:06,856 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,856 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,35913,1690222239741 WAL count=0, meta=false 2023-07-24 18:11:06,857 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,35913,1690222239741 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:06,857 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,41915,1690222243305 after splitting done 2023-07-24 18:11:06,857 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,41915,1690222243305 from processing; numProcessing=3 2023-07-24 18:11:06,858 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245-splitting dir is empty, no logs to split. 2023-07-24 18:11:06,858 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,37467,1690222246245 WAL count=0, meta=false 2023-07-24 18:11:06,858 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,37467,1690222246245 WAL splitting is done? 
wals=0, meta=false 2023-07-24 18:11:06,858 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true in 4.6190 sec 2023-07-24 18:11:06,859 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43449,1690222239527 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,43449,1690222239527-splitting does not exist. 2023-07-24 18:11:06,859 INFO [PEWorker-1] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,35913,1690222239741 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35913,1690222239741-splitting does not exist. 2023-07-24 18:11:06,862 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN}] 2023-07-24 18:11:06,862 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,37467,1690222246245 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37467,1690222246245-splitting does not exist. 2023-07-24 18:11:06,862 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,35913,1690222239741 after splitting done 2023-07-24 18:11:06,863 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,35913,1690222239741 from processing; numProcessing=2 2023-07-24 18:11:06,863 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN 2023-07-24 18:11:06,863 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN}] 2023-07-24 18:11:06,863 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:06,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,35913,1690222239741, splitWal=true, meta=false in 4.6300 sec 2023-07-24 18:11:06,864 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN 2023-07-24 18:11:06,864 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN; state=OPEN, 
location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:06,864 DEBUG [jenkins-hbase4:36473] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:06,865 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:06,865 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:06,865 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:06,865 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:06,865 DEBUG [jenkins-hbase4:36473] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-24 18:11:06,866 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=122 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:06,866 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:06,866 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222266866"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222266866"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222266866"}]},"ts":"1690222266866"} 2023-07-24 18:11:06,866 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222266866"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222266866"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222266866"}]},"ts":"1690222266866"} 2023-07-24 18:11:06,868 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=123, ppid=122, state=RUNNABLE; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,37389,1690222261512}] 2023-07-24 18:11:06,868 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=121, state=RUNNABLE; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,35553,1690222261840}] 2023-07-24 18:11:07,022 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:07,022 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:07,023 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45282, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:07,024 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:07,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:07,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:07,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,026 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:07,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93db382913b37f9661cac1fd8ee01a9, NAME => 'hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:07,026 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:07,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. service=MultiRowMutationService 2023-07-24 18:11:07,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 18:11:07,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:07,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,027 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:07,027 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:07,028 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e0fb36cbe9750f5f2b47d078547932 columnFamilyName info 2023-07-24 18:11:07,028 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,029 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:11:07,029 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:11:07,029 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93db382913b37f9661cac1fd8ee01a9 columnFamilyName m 2023-07-24 18:11:07,034 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:07,034 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:07,036 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:11:07,036 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:11:07,039 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:11:07,039 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(310): Store=b3e0fb36cbe9750f5f2b47d078547932/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:07,040 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,041 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,041 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:11:07,041 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(310): Store=f93db382913b37f9661cac1fd8ee01a9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:07,042 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,043 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,044 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:07,045 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e0fb36cbe9750f5f2b47d078547932; next sequenceid=21; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11776556160, jitterRate=0.09677726030349731}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:07,045 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:07,046 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., pid=123, masterSystemTime=1690222267019 2023-07-24 18:11:07,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:07,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93db382913b37f9661cac1fd8ee01a9; next sequenceid=77; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5621e51c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:07,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:11:07,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:07,048 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:07,048 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., pid=124, masterSystemTime=1690222267022 2023-07-24 18:11:07,050 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=122 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, openSeqNum=21, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:07,050 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222267048"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222267048"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222267048"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222267048"}]},"ts":"1690222267048"} 2023-07-24 18:11:07,052 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
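The RegionStateStore puts above are what publish a freshly opened region back into hbase:meta (regioninfo, server, serverstartcode, seqnumDuringOpen). As a minimal illustration of how a client consumes that row, the sketch below uses the public RegionLocator API to ask where hbase:namespace currently lives; the class name and the standalone main() are illustrative only and not part of this test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocation {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // reload=true forces a round trip to hbase:meta instead of trusting the client cache
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("region " + loc.getRegion().getEncodedName()
              + " is on " + loc.getServerName()
              + " openSeqNum=" + loc.getSeqNum());
        }
      }
    }

Passing reload=true is how a caller recovers a stale location after the kind of server crash being processed in this stanza.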
2023-07-24 18:11:07,052 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, openSeqNum=77, regionLocation=jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:07,052 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222267052"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222267052"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222267052"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222267052"}]},"ts":"1690222267052"} 2023-07-24 18:11:07,053 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:07,057 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43449 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:07,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=123, resume processing ppid=122 2023-07-24 18:11:07,057 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, ppid=122, state=SUCCESS; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,37389,1690222261512 in 184 msec 2023-07-24 18:11:07,058 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4171 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:43449 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., hostname=jenkins-hbase4.apache.org,43449,1690222239527, seqNum=13, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:43449 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:07,058 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43449 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43449 2023-07-24 18:11:07,060 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=121 2023-07-24 18:11:07,060 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=121, state=SUCCESS; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,35553,1690222261840 in 186 msec 2023-07-24 18:11:07,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=115 2023-07-24 18:11:07,061 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=115, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN in 196 msec 2023-07-24 18:11:07,061 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,37467,1690222246245 after splitting done 2023-07-24 18:11:07,061 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,37467,1690222246245 from processing; numProcessing=1 2023-07-24 18:11:07,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=117 2023-07-24 18:11:07,062 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43449,1690222239527 after splitting done 2023-07-24 18:11:07,062 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN in 201 msec 2023-07-24 18:11:07,062 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43449,1690222239527 from processing; numProcessing=0 2023-07-24 18:11:07,062 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,37467,1690222246245, splitWal=true, meta=false in 4.8300 sec 2023-07-24 18:11:07,064 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43449,1690222239527, splitWal=true, meta=false in 4.8270 sec 2023-07-24 18:11:07,849 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-24 18:11:07,869 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:11:07,872 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:11:07,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.843sec 2023-07-24 18:11:07,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
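The RpcRetryingCallerImpl entry above (tries=6, retries=46, against an address that is no longer serving) is governed by client-side retry settings. A hedged sketch of setting those knobs programmatically follows; the read against hbase:rsgroup and the concrete values are placeholders for illustration, only the configuration keys themselves come from HBase.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Table;

    public class RetryTuningSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.client.retries.number", 10);        // this test run shows retries=46
        conf.setLong("hbase.client.pause", 100);                // base backoff in ms between attempts
        conf.setInt("hbase.rpc.timeout", 20_000);               // per-RPC timeout in ms
        conf.setInt("hbase.client.operation.timeout", 60_000);  // cap for the whole retry loop in ms
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("hbase:rsgroup"))) {
          table.get(new Get(new byte[] {0}));                   // a read that goes through the retrying caller
        }
      }
    }

In this run the RSGroup startup worker simply keeps retrying until the relocated hbase:rsgroup region answers, which is what eventually happens at 18:11:11 below.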
2023-07-24 18:11:07,872 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:07,873 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=125, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 18:11:07,874 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 18:11:07,875 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:11:07,876 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:11:07,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-24 18:11:07,878 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:07,878 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 empty. 2023-07-24 18:11:07,879 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:07,879 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 18:11:07,881 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 18:11:07,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 18:11:07,884 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:07,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:07,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
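The HMaster(2148) entry above shows the descriptor used for hbase:quota: two families, q and u, VERSIONS => 1, BLOCKSIZE => 65536, ROW bloom filters. For reference, an equivalent table can be declared through the public Admin API roughly as sketched below; demo_quota is a hypothetical table name and the builder calls are the standard HBase 2.x client API, not code taken from this test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        TableName name = TableName.valueOf("demo_quota");       // hypothetical table name
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (!admin.tableExists(name)) {
            admin.createTable(TableDescriptorBuilder.newBuilder(name)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
                    .setMaxVersions(1).setBloomFilterType(BloomType.ROW).setBlocksize(65536).build())
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u"))
                    .setMaxVersions(1).setBloomFilterType(BloomType.ROW).setBlocksize(65536).build())
                .build());
          }
        }
      }
    }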
2023-07-24 18:11:07,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:11:07,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36473,1690222261355-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:11:07,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36473,1690222261355-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 18:11:07,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:11:07,893 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:07,894 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 785da8c92abeb2f759b91756349c6ee1, NAME => 'hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.tmp 2023-07-24 18:11:07,903 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:07,903 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 785da8c92abeb2f759b91756349c6ee1, disabling compactions & flushes 2023-07-24 18:11:07,903 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:07,903 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:07,903 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. after waiting 0 ms 2023-07-24 18:11:07,903 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:07,904 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 
2023-07-24 18:11:07,904 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 785da8c92abeb2f759b91756349c6ee1: 2023-07-24 18:11:07,906 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:11:07,907 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222267906"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222267906"}]},"ts":"1690222267906"} 2023-07-24 18:11:07,908 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:11:07,909 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:11:07,909 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222267909"}]},"ts":"1690222267909"} 2023-07-24 18:11:07,910 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 18:11:07,914 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:07,914 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:07,914 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:07,914 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:07,914 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:07,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN}] 2023-07-24 18:11:07,917 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN 2023-07-24 18:11:07,917 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37389,1690222261512; forceNewPlan=false, retain=false 2023-07-24 18:11:07,921 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x7f642a12 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:07,926 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@46174207, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:07,928 DEBUG 
[hconnection-0x3865e7ea-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:07,930 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52520, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:07,935 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-24 18:11:07,935 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f642a12 to 127.0.0.1:59012 2023-07-24 18:11:07,935 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:07,936 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:36473 after: jenkins-hbase4.apache.org:36473 2023-07-24 18:11:07,936 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x663fa53d to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:07,941 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71f244a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:07,942 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:08,068 INFO [jenkins-hbase4:36473] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:11:08,069 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=785da8c92abeb2f759b91756349c6ee1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:08,069 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222268069"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222268069"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222268069"}]},"ts":"1690222268069"} 2023-07-24 18:11:08,071 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; OpenRegionProcedure 785da8c92abeb2f759b91756349c6ee1, server=jenkins-hbase4.apache.org,37389,1690222261512}] 2023-07-24 18:11:08,168 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:11:08,226 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 
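HBaseTestingUtility(1262) "HBase has been restarted" above marks the point where the test has brought the master and region servers back up on the same HDFS and ZooKeeper. A rough sketch of that restart pattern, assuming HBaseTestingUtility's shutdownMiniHBaseCluster/restartHBaseCluster helpers behave as their names suggest, is:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class RestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(3);              // three region servers, matching RS:0..RS:2 in this log
        try {
          util.shutdownMiniHBaseCluster();     // stop HBase only; HDFS and ZooKeeper keep running
          util.restartHBaseCluster(3);         // bring the master and region servers back
          util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
        } finally {
          util.shutdownMiniCluster();          // tear down HBase, DFS and ZooKeeper
        }
      }
    }

The Waiter.waitFor entries that follow serve the same purpose as the waitUntilAllRegionsAssigned call: block the test until the restarted cluster reaches a known state.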
2023-07-24 18:11:08,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 785da8c92abeb2f759b91756349c6ee1, NAME => 'hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:08,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:08,227 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,227 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,228 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,230 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/q 2023-07-24 18:11:08,230 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/q 2023-07-24 18:11:08,230 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 785da8c92abeb2f759b91756349c6ee1 columnFamilyName q 2023-07-24 18:11:08,231 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(310): Store=785da8c92abeb2f759b91756349c6ee1/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:08,231 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,232 DEBUG 
[StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/u 2023-07-24 18:11:08,233 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/u 2023-07-24 18:11:08,233 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 785da8c92abeb2f759b91756349c6ee1 columnFamilyName u 2023-07-24 18:11:08,233 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(310): Store=785da8c92abeb2f759b91756349c6ee1/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:08,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,236 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 18:11:08,236 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers 2023-07-24 18:11:08,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
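The FlushLargeStoresPolicy line above notes that hbase:quota does not set hbase.hregion.percolumnfamilyflush.size.lower.bound, so the per-family flush threshold falls back to the memstore flush size divided by the number of families (64.0 M here). If a table wanted an explicit lower bound it could be set as a table-level value, roughly as below; demo_table and the 16 MB figure are made up for illustration.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class FlushPolicySketch {
      public static void main(String[] args) throws Exception {
        TableName name = TableName.valueOf("demo_table");       // hypothetical table
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor current = admin.getDescriptor(name);
          // Let the flush policy pick individual large families once they pass 16 MB,
          // instead of relying on the per-family default derived from the region flush size.
          TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
              .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                  String.valueOf(16L * 1024 * 1024))
              .build();
          admin.modifyTable(updated);
        }
      }
    }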
2023-07-24 18:11:08,239 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:08,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:08,241 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 785da8c92abeb2f759b91756349c6ee1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10714518080, jitterRate=-0.0021327435970306396}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 18:11:08,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 785da8c92abeb2f759b91756349c6ee1: 2023-07-24 18:11:08,242 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1., pid=127, masterSystemTime=1690222268222 2023-07-24 18:11:08,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:08,243 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:08,244 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=785da8c92abeb2f759b91756349c6ee1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:08,244 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222268244"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222268244"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222268244"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222268244"}]},"ts":"1690222268244"} 2023-07-24 18:11:08,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-24 18:11:08,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; OpenRegionProcedure 785da8c92abeb2f759b91756349c6ee1, server=jenkins-hbase4.apache.org,37389,1690222261512 in 174 msec 2023-07-24 18:11:08,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=125 2023-07-24 18:11:08,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=125, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN in 333 msec 2023-07-24 18:11:08,249 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:11:08,249 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222268249"}]},"ts":"1690222268249"} 2023-07-24 18:11:08,250 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 18:11:08,252 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:11:08,254 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, state=SUCCESS; CreateTableProcedure table=hbase:quota in 380 msec 2023-07-24 18:11:08,367 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:11:08,368 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:11:08,369 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:11:08,369 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-24 18:11:11,082 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:11,083 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46998, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:11,084 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:11:11,084 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
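With pid=125 finished at the start of this stanza, the hbase:quota table that backs MasterQuotaManager is online. Quota definitions written through the Admin API end up as rows in that table; a minimal, illustrative sketch (demo_table is hypothetical and the throttle values are arbitrary) might look like:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class QuotaSketch {
      public static void main(String[] args) throws Exception {
        TableName name = TableName.valueOf("demo_table");       // hypothetical table
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Limit the table to 100 requests per second; the setting is persisted in hbase:quota.
          admin.setQuota(QuotaSettingsFactory.throttleTable(
              name, ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
          // Remove the throttle again.
          admin.setQuota(QuotaSettingsFactory.unthrottleTable(name));
        }
      }
    }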
2023-07-24 18:11:11,098 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:11,098 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:11,099 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:11,101 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-24 18:11:11,101 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36473,1690222261355] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:11:11,145 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:11:11,149 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:11:11,153 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:11:11,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36473] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:11:11,156 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x031a14e0 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:11,170 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5f03716a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:11,171 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:11,176 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:11,178 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:11,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101988716b4001b connected 2023-07-24 18:11:11,181 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:11,184 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58226, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-24 18:11:11,194 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-24 18:11:11,195 INFO [Listener at localhost/44627] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:11:11,195 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x663fa53d to 127.0.0.1:59012 2023-07-24 18:11:11,195 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,195 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:11:11,195 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(257): Found active master hash=1455240505, stopped=false 2023-07-24 18:11:11,195 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:11:11,196 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:11:11,196 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 18:11:11,196 INFO [Listener at localhost/44627] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:11,197 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:11,197 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:11,197 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:11,198 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:11,198 INFO [Listener at localhost/44627] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:11:11,198 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:11,199 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:11,200 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:11,201 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40869261 to 127.0.0.1:59012 2023-07-24 18:11:11,201 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): 
regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:11,201 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:11,201 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37389,1690222261512' ***** 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35775,1690222261683' ***** 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35553,1690222261840' ***** 2023-07-24 18:11:11,202 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:11,202 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:11,202 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:11,202 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:11,202 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:11,219 INFO [RS:0;jenkins-hbase4:37389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@d8ea3d8{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:11,219 INFO [RS:1;jenkins-hbase4:35775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@398c60cb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:11,221 INFO [RS:0;jenkins-hbase4:37389] server.AbstractConnector(383): Stopped ServerConnector@77124f61{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:11,221 INFO [RS:1;jenkins-hbase4:35775] server.AbstractConnector(383): Stopped ServerConnector@8ee8361{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:11,221 INFO [RS:0;jenkins-hbase4:37389] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:11,221 INFO [RS:1;jenkins-hbase4:35775] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:11,221 INFO [RS:2;jenkins-hbase4:35553] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7221d2d9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:11,221 INFO [RS:1;jenkins-hbase4:35775] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@75f849ee{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:11,221 INFO [RS:0;jenkins-hbase4:37389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@393a8f0d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:11,222 INFO [RS:0;jenkins-hbase4:37389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42291629{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:11,222 INFO [RS:1;jenkins-hbase4:35775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b179b51{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:11,223 INFO [RS:0;jenkins-hbase4:37389] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:11,223 INFO [RS:0;jenkins-hbase4:37389] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:11,223 INFO [RS:0;jenkins-hbase4:37389] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:11,223 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(3305): Received CLOSE for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:11,223 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:11,224 INFO [RS:1;jenkins-hbase4:35775] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:11,225 INFO [RS:2;jenkins-hbase4:35553] server.AbstractConnector(383): Stopped ServerConnector@6da15dcc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:11,226 INFO [RS:2;jenkins-hbase4:35553] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:11,230 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(3305): Received CLOSE for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:11,230 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:11,230 INFO [RS:2;jenkins-hbase4:35553] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c9dc244{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:11,231 DEBUG [RS:0;jenkins-hbase4:37389] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1686a54d to 127.0.0.1:59012 2023-07-24 18:11:11,231 INFO [RS:2;jenkins-hbase4:35553] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@101b7825{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:11,231 DEBUG [RS:0;jenkins-hbase4:37389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,231 INFO [RS:0;jenkins-hbase4:37389] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-24 18:11:11,231 INFO [RS:0;jenkins-hbase4:37389] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:11,231 INFO [RS:0;jenkins-hbase4:37389] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:11,231 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:11:11,231 INFO [RS:1;jenkins-hbase4:35775] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:11,231 INFO [RS:1;jenkins-hbase4:35775] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:11,231 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:11,231 DEBUG [RS:1;jenkins-hbase4:35775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66c92191 to 127.0.0.1:59012 2023-07-24 18:11:11,231 DEBUG [RS:1;jenkins-hbase4:35775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,232 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35775,1690222261683; all regions closed. 2023-07-24 18:11:11,232 DEBUG [RS:1;jenkins-hbase4:35775] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 18:11:11,232 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:11,232 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-24 18:11:11,233 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1478): Online Regions={b3e0fb36cbe9750f5f2b47d078547932=hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., 1588230740=hbase:meta,,1.1588230740, 785da8c92abeb2f759b91756349c6ee1=hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.} 2023-07-24 18:11:11,233 INFO [RS:2;jenkins-hbase4:35553] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:11,233 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1504): Waiting on 1588230740, 785da8c92abeb2f759b91756349c6ee1, b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:11,235 INFO [RS:2;jenkins-hbase4:35553] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:11,236 INFO [RS:2;jenkins-hbase4:35553] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:11,236 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(3305): Received CLOSE for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:11,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:11:11,236 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:11:11,236 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:11,236 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:11,236 DEBUG [RS:2;jenkins-hbase4:35553] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1aed33ba to 127.0.0.1:59012 2023-07-24 18:11:11,236 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:11:11,237 DEBUG [RS:2;jenkins-hbase4:35553] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,237 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:11:11,237 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1478): Online Regions={f93db382913b37f9661cac1fd8ee01a9=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.} 2023-07-24 18:11:11,237 DEBUG [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1504): Waiting on f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:11,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f93db382913b37f9661cac1fd8ee01a9, disabling compactions & flushes 2023-07-24 18:11:11,236 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:11,237 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:11,237 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. after waiting 0 ms 2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:11,238 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f93db382913b37f9661cac1fd8ee01a9 1/1 column families, dataSize=242 B heapSize=648 B 2023-07-24 18:11:11,237 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:11:11,238 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:11:11,238 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.05 KB heapSize=5.87 KB 2023-07-24 18:11:11,264 DEBUG [RS:1;jenkins-hbase4:35775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:11,264 INFO [RS:1;jenkins-hbase4:35775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35775%2C1690222261683:(num 1690222262514) 2023-07-24 18:11:11,265 DEBUG [RS:1;jenkins-hbase4:35775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,265 INFO [RS:1;jenkins-hbase4:35775] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,272 INFO [RS:1;jenkins-hbase4:35775] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:11,272 INFO [RS:1;jenkins-hbase4:35775] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:11,272 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:11,272 INFO [RS:1;jenkins-hbase4:35775] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:11,272 INFO [RS:1;jenkins-hbase4:35775] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:11:11,274 INFO [RS:1;jenkins-hbase4:35775] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35775 2023-07-24 18:11:11,288 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/23.seqid, newMaxSeqId=23, maxSeqId=20 2023-07-24 18:11:11,292 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,292 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:11,292 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,292 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:11,292 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,294 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35775,1690222261683 2023-07-24 18:11:11,294 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:11,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:11,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:11,296 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35775,1690222261683] 2023-07-24 18:11:11,296 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35775,1690222261683; numProcessing=1 2023-07-24 18:11:11,298 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,300 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 785da8c92abeb2f759b91756349c6ee1, disabling compactions & flushes 2023-07-24 18:11:11,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:11,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:11,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. after waiting 0 ms 2023-07-24 18:11:11,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:11,319 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=163 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/a79e17af74e44f32952a7d071379d76d 2023-07-24 18:11:11,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:11:11,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:11,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 785da8c92abeb2f759b91756349c6ee1: 2023-07-24 18:11:11,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 
2023-07-24 18:11:11,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=242 B at sequenceid=80 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 2023-07-24 18:11:11,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 2023-07-24 18:11:11,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6, entries=2, sequenceid=80, filesize=5.0 K 2023-07-24 18:11:11,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~242 B/242, heapSize ~632 B/632, currentSize=0 B/0 for f93db382913b37f9661cac1fd8ee01a9 in 139ms, sequenceid=80, compaction requested=true 2023-07-24 18:11:11,395 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=163 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/230c7749dda64496b1ef6916ca5f4650 2023-07-24 18:11:11,396 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,396 INFO [RS:1;jenkins-hbase4:35775] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35775,1690222261683; zookeeper connection closed. 
2023-07-24 18:11:11,396 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35775-0x101988716b40012, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,398 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35775,1690222261683 already deleted, retry=false 2023-07-24 18:11:11,398 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35775,1690222261683 expired; onlineServers=2 2023-07-24 18:11:11,398 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@d3fb791] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@d3fb791 2023-07-24 18:11:11,402 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/a79e17af74e44f32952a7d071379d76d as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d 2023-07-24 18:11:11,406 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 18:11:11,406 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 18:11:11,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/recovered.edits/83.seqid, newMaxSeqId=83, maxSeqId=76 2023-07-24 18:11:11,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:11,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:11,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:11:11,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
2023-07-24 18:11:11,413 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d, entries=26, sequenceid=163, filesize=7.7 K 2023-07-24 18:11:11,414 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/230c7749dda64496b1ef6916ca5f4650 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650 2023-07-24 18:11:11,422 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650, entries=2, sequenceid=163, filesize=4.7 K 2023-07-24 18:11:11,424 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.05 KB/3126, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 185ms, sequenceid=163, compaction requested=true 2023-07-24 18:11:11,434 DEBUG [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 18:11:11,437 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35553,1690222261840; all regions closed. 2023-07-24 18:11:11,437 DEBUG [RS:2;jenkins-hbase4:35553] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 18:11:11,439 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-24 18:11:11,439 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-24 18:11:11,446 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/recovered.edits/166.seqid, newMaxSeqId=166, maxSeqId=151 2023-07-24 18:11:11,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:11,448 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:11,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:11:11,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:11,449 DEBUG [RS:2;jenkins-hbase4:35553] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35553%2C1690222261840:(num 1690222262525) 2023-07-24 18:11:11,449 DEBUG [RS:2;jenkins-hbase4:35553] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had 
[ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:11,449 INFO [RS:2;jenkins-hbase4:35553] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:11,450 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:11,451 INFO [RS:2;jenkins-hbase4:35553] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35553 2023-07-24 18:11:11,456 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:11,456 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,456 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35553,1690222261840 2023-07-24 18:11:11,456 ERROR [Listener at localhost/44627-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@5be22187 rejected from java.util.concurrent.ThreadPoolExecutor@5fddbe23[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 6] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-24 18:11:11,457 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35553,1690222261840] 2023-07-24 18:11:11,457 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35553,1690222261840; numProcessing=2 2023-07-24 18:11:11,462 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35553,1690222261840 already deleted, retry=false 2023-07-24 18:11:11,463 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35553,1690222261840 expired; onlineServers=1 2023-07-24 18:11:11,635 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1170): stopping server 
jenkins-hbase4.apache.org,37389,1690222261512; all regions closed. 2023-07-24 18:11:11,635 DEBUG [RS:0;jenkins-hbase4:37389] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 18:11:11,649 DEBUG [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:11,650 INFO [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37389%2C1690222261512.meta:.meta(num 1690222262642) 2023-07-24 18:11:11,677 DEBUG [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:11,677 INFO [RS:0;jenkins-hbase4:37389] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37389%2C1690222261512:(num 1690222262525) 2023-07-24 18:11:11,677 DEBUG [RS:0;jenkins-hbase4:37389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,677 INFO [RS:0;jenkins-hbase4:37389] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:11,678 INFO [RS:0;jenkins-hbase4:37389] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:11,678 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:11,679 INFO [RS:0;jenkins-hbase4:37389] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37389 2023-07-24 18:11:11,700 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37389,1690222261512 2023-07-24 18:11:11,700 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:11,701 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37389,1690222261512] 2023-07-24 18:11:11,701 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37389,1690222261512; numProcessing=3 2023-07-24 18:11:11,702 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37389,1690222261512 already deleted, retry=false 2023-07-24 18:11:11,702 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37389,1690222261512 expired; onlineServers=0 2023-07-24 18:11:11,702 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36473,1690222261355' ***** 2023-07-24 18:11:11,702 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 18:11:11,703 DEBUG [M:0;jenkins-hbase4:36473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5282ab36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:11,703 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:11,713 INFO [M:0;jenkins-hbase4:36473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3bdce63f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:11,714 INFO [M:0;jenkins-hbase4:36473] server.AbstractConnector(383): Stopped ServerConnector@727a60af{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:11,714 INFO [M:0;jenkins-hbase4:36473] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:11,715 INFO [M:0;jenkins-hbase4:36473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a3cf501{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:11,715 INFO [M:0;jenkins-hbase4:36473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78fbadc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:11,717 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36473,1690222261355 2023-07-24 18:11:11,717 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36473,1690222261355; all regions closed. 2023-07-24 18:11:11,717 DEBUG [M:0;jenkins-hbase4:36473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:11,717 INFO [M:0;jenkins-hbase4:36473] master.HMaster(1491): Stopping master jetty server 2023-07-24 18:11:11,722 INFO [M:0;jenkins-hbase4:36473] server.AbstractConnector(383): Stopped ServerConnector@211dfb1c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:11,723 DEBUG [M:0;jenkins-hbase4:36473] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 18:11:11,723 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 18:11:11,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222262304] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222262304,5,FailOnTimeoutGroup] 2023-07-24 18:11:11,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222262303] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222262303,5,FailOnTimeoutGroup] 2023-07-24 18:11:11,723 DEBUG [M:0;jenkins-hbase4:36473] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 18:11:11,723 INFO [M:0;jenkins-hbase4:36473] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 18:11:11,723 INFO [M:0;jenkins-hbase4:36473] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 18:11:11,731 INFO [M:0;jenkins-hbase4:36473] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:11,731 DEBUG [M:0;jenkins-hbase4:36473] master.HMaster(1512): Stopping service threads 2023-07-24 18:11:11,731 INFO [M:0;jenkins-hbase4:36473] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 18:11:11,732 ERROR [M:0;jenkins-hbase4:36473] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 18:11:11,732 INFO [M:0;jenkins-hbase4:36473] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 18:11:11,732 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 18:11:11,799 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,799 INFO [RS:2;jenkins-hbase4:35553] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35553,1690222261840; zookeeper connection closed. 2023-07-24 18:11:11,799 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:35553-0x101988716b40013, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,799 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@f0a896e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@f0a896e 2023-07-24 18:11:11,802 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,802 INFO [RS:0;jenkins-hbase4:37389] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37389,1690222261512; zookeeper connection closed. 
2023-07-24 18:11:11,802 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:37389-0x101988716b40011, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:11,802 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@50d0219] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@50d0219 2023-07-24 18:11:11,802 INFO [Listener at localhost/44627] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 18:11:11,804 INFO [M:0;jenkins-hbase4:36473] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 18:11:11,804 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:11,805 INFO [M:0;jenkins-hbase4:36473] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 18:11:11,805 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:11,806 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:11:11,806 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-24 18:11:11,806 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:11,806 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:11,806 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:11:11,806 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-24 18:11:11,806 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 18:11:11,806 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.27 KB heapSize=54.85 KB 2023-07-24 18:11:11,807 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:11,857 INFO [M:0;jenkins-hbase4:36473] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.27 KB at sequenceid=958 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a0600db23e4541fe98b4a9376b202081 2023-07-24 18:11:11,864 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a0600db23e4541fe98b4a9376b202081 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a0600db23e4541fe98b4a9376b202081 2023-07-24 18:11:11,879 INFO [M:0;jenkins-hbase4:36473] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a0600db23e4541fe98b4a9376b202081, entries=13, sequenceid=958, filesize=7.2 K 2023-07-24 18:11:11,880 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegion(2948): Finished flush of dataSize ~45.27 KB/46355, heapSize ~54.84 KB/56152, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 74ms, sequenceid=958, compaction requested=false 2023-07-24 18:11:11,888 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:11,888 DEBUG [M:0;jenkins-hbase4:36473] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:11,895 INFO [M:0;jenkins-hbase4:36473] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 18:11:11,895 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:11,900 INFO [M:0;jenkins-hbase4:36473] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36473 2023-07-24 18:11:11,902 DEBUG [M:0;jenkins-hbase4:36473] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36473,1690222261355 already deleted, retry=false 2023-07-24 18:11:12,004 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:12,004 INFO [M:0;jenkins-hbase4:36473] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36473,1690222261355; zookeeper connection closed. 
2023-07-24 18:11:12,004 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:36473-0x101988716b40010, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:12,005 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-24 18:11:13,731 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:11:14,008 INFO [Listener at localhost/44627] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:14,008 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,008 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,009 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:14,009 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,009 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:14,009 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:14,010 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33035 2023-07-24 18:11:14,011 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,012 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,014 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33035 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:14,018 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:330350x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:14,018 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33035-0x101988716b4001c connected 2023-07-24 18:11:14,025 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:14,025 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase 
Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:14,026 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:14,028 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33035 2023-07-24 18:11:14,029 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33035 2023-07-24 18:11:14,030 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33035 2023-07-24 18:11:14,034 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33035 2023-07-24 18:11:14,035 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33035 2023-07-24 18:11:14,037 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:14,037 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:14,037 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:14,037 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 18:11:14,038 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:14,038 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:14,038 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:11:14,038 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 40867 2023-07-24 18:11:14,038 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:14,043 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,043 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@729672e5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:14,043 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,043 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@65d94138{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:14,162 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:14,163 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:14,164 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:14,164 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:14,165 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,166 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@267a8b58{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-40867-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8571851719786460810/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:14,167 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@263442ac{HTTP/1.1, (http/1.1)}{0.0.0.0:40867} 2023-07-24 18:11:14,167 INFO [Listener at localhost/44627] server.Server(415): Started @42387ms 2023-07-24 18:11:14,167 INFO [Listener at localhost/44627] master.HMaster(444): hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9, hbase.cluster.distributed=false 2023-07-24 18:11:14,168 DEBUG [pool-523-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-24 18:11:14,180 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:14,181 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:14,182 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41163 2023-07-24 18:11:14,182 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:14,183 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:14,184 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,185 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,186 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41163 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:14,190 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:411630x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:14,191 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:411630x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:14,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41163-0x101988716b4001d connected 2023-07-24 18:11:14,192 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:14,192 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:14,193 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41163 2023-07-24 18:11:14,193 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41163 2023-07-24 18:11:14,194 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=41163 2023-07-24 18:11:14,198 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41163 2023-07-24 18:11:14,199 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41163 2023-07-24 18:11:14,202 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:14,202 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:14,202 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:14,203 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:14,203 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:14,203 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:14,203 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:14,204 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 40541 2023-07-24 18:11:14,204 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:14,210 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,210 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5d1a92db{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:14,210 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,211 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1f201ec0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:14,337 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:14,338 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:14,338 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:14,339 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 
18:11:14,340 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,341 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5113828{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-40541-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4751705132679350183/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:14,344 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@6b3653b6{HTTP/1.1, (http/1.1)}{0.0.0.0:40541} 2023-07-24 18:11:14,344 INFO [Listener at localhost/44627] server.Server(415): Started @42564ms 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:14,358 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:14,359 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46835 2023-07-24 18:11:14,360 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:14,361 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:14,362 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,364 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,366 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46835 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:14,370 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): 
regionserver:468350x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:14,371 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:468350x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:14,372 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:468350x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:14,372 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:468350x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:14,375 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46835-0x101988716b4001e connected 2023-07-24 18:11:14,382 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46835 2023-07-24 18:11:14,382 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46835 2023-07-24 18:11:14,386 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46835 2023-07-24 18:11:14,391 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46835 2023-07-24 18:11:14,391 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46835 2023-07-24 18:11:14,393 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:14,393 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:14,393 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:14,394 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:14,394 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:14,394 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:14,394 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
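[editor note] The entries above show one of the restarted region servers coming up: its RPC call queues are started and ZKUtil registers watchers on /hbase/master, /hbase/running and /hbase/acl even though those znodes do not exist yet. As a rough illustration of the ZooKeeper primitive behind those "Set watcher on znode that does not yet exist" lines (not HBase's ZKUtil itself), a plain ZooKeeper client arms such a watch with exists(); the quorum address 127.0.0.1:59012 is taken from the log, while the class name and timeouts below are made up.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatch {
      public static void main(String[] args) throws Exception {
        // Callback fired when /hbase/master is created, deleted or its data changes.
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("event=" + event.getType() + " path=" + event.getPath());

        // Quorum address copied from the log; the 30s session timeout is arbitrary.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:59012", 30_000, watcher);

        // exists() registers the watch even when the znode is absent, which is
        // exactly the situation the "does not yet exist" debug lines describe.
        if (zk.exists("/hbase/master", watcher) == null) {
          System.out.println("/hbase/master not created yet; watch armed");
        }

        Thread.sleep(10_000); // give a NodeCreated event a chance to arrive
        zk.close();
      }
    }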
2023-07-24 18:11:14,394 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 42675 2023-07-24 18:11:14,395 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:14,398 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,399 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3a32e641{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:14,399 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,399 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@215674ac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:14,527 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:14,528 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:14,528 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:14,528 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:14,529 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,530 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@77a8a8e7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-42675-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1990729040085076690/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:14,532 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@2563764c{HTTP/1.1, (http/1.1)}{0.0.0.0:42675} 2023-07-24 18:11:14,532 INFO [Listener at localhost/44627] server.Server(415): Started @42752ms 2023-07-24 18:11:14,544 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:14,544 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,545 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,545 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:14,545 INFO 
[Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:14,545 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:14,545 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:14,546 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41941 2023-07-24 18:11:14,546 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:14,548 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:14,548 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,549 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,551 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41941 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:14,556 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:419410x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:14,557 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41941-0x101988716b4001f connected 2023-07-24 18:11:14,557 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:14,558 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:14,558 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:14,564 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41941 2023-07-24 18:11:14,564 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41941 2023-07-24 18:11:14,566 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41941 2023-07-24 18:11:14,576 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41941 2023-07-24 18:11:14,578 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41941 2023-07-24 18:11:14,581 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:14,581 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:14,582 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:14,583 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:14,583 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:14,583 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:14,583 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:14,584 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 45373 2023-07-24 18:11:14,584 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:14,596 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,596 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@48ac8439{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:14,597 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,597 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7bdf23d8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:14,727 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:14,728 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:14,728 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:14,728 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:14,729 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:14,730 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1f8f1476{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-45373-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9043377230502438212/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:14,732 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@2259f828{HTTP/1.1, (http/1.1)}{0.0.0.0:45373} 2023-07-24 18:11:14,732 INFO [Listener at localhost/44627] server.Server(415): Started @42952ms 2023-07-24 18:11:14,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:14,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@40c0c4ae{HTTP/1.1, (http/1.1)}{0.0.0.0:42957} 2023-07-24 18:11:14,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42964ms 2023-07-24 18:11:14,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,746 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:14,746 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,748 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:14,749 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:14,749 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:14,749 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:14,749 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:14,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:14,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33035,1690222274007 from backup master directory 2023-07-24 18:11:14,753 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:14,754 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,754 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:14,754 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:14,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,776 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:14,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5236a4f8 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:14,822 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4bc9a3ed, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:14,822 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:14,823 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 18:11:14,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:14,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355-dead as it is dead 2023-07-24 18:11:14,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355-dead/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:14,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355-dead/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 after 1ms 2023-07-24 18:11:14,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355-dead/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:14,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,36473,1690222261355-dead 2023-07-24 18:11:14,835 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33035%2C1690222274007, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,33035,1690222274007, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/oldWALs, maxLogs=10 2023-07-24 18:11:14,850 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:14,850 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:14,850 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:14,853 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/WALs/jenkins-hbase4.apache.org,33035,1690222274007/jenkins-hbase4.apache.org%2C33035%2C1690222274007.1690222274837 2023-07-24 18:11:14,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:11:14,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:14,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:14,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:14,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:14,859 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:14,860 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:11:14,860 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:11:14,869 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a0600db23e4541fe98b4a9376b202081 2023-07-24 18:11:14,873 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/e38e113787d94d1a96a83aa49006b270 2023-07-24 18:11:14,874 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:14,874 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): 
Found 1 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-24 18:11:14,875 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:14,881 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=848, maxSequenceIdInLog=960, path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:14,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C36473%2C1690222261355.1690222262103 2023-07-24 18:11:14,886 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:14,889 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/960.seqid, newMaxSeqId=960, maxSeqId=846 2023-07-24 18:11:14,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=961; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10664665760, jitterRate=-0.006775602698326111}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:14,890 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:14,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:11:14,892 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:11:14,892 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:11:14,892 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
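[editor note] The sequence above is the new master (33035) taking over the 'master:store' local region: it recovers the HDFS lease on the dead master's WAL, renames the file under recovered.wals, replays it (0 edits applied, 128 skipped), and reopens the region at sequenceid 961. The lease-recovery step is built around a public HDFS call, DistributedFileSystem.recoverLease(); the sketch below only illustrates that primitive under a made-up path, it is not HBase's RecoverLeaseFSUtils.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RecoverWalLease {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical file; the real target is the "-dead" master WAL in the log.
        Path wal = new Path("hdfs://localhost:44619/example/dead-master.wal");
        DistributedFileSystem dfs = (DistributedFileSystem) wal.getFileSystem(conf);

        // recoverLease() asks the NameNode to close the file on behalf of the
        // dead writer; it returns true once the last block is finalized.
        while (!dfs.recoverLease(wal)) {
          Thread.sleep(1000); // simple fixed back-off between attempts
        }
        System.out.println("lease recovered, safe to read " + wal);
      }
    }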
2023-07-24 18:11:14,893 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 18:11:14,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-24 18:11:14,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-24 18:11:14,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:11:14,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-24 18:11:14,904 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:11:14,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, REOPEN/MOVE 2023-07-24 18:11:14,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:11:14,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=18, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34741,1690222239908, splitWal=true, meta=false 2023-07-24 18:11:14,905 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=19, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-24 18:11:14,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:14,906 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:14,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=26, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-24 18:11:14,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=27, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:14,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=48, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:14,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=69, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-24 18:11:14,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=70, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:11:14,908 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; 
CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:14,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:14,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:14,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-24 18:11:14,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:14,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:14,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:14,910 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-24 18:11:14,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:14,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-24 18:11:14,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-24 18:11:14,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1690222254621 type: FLUSH version: 2 ttl: 0 ) 2023-07-24 18:11:14,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:14,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-24 18:11:14,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:11:14,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-24 18:11:14,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-24 18:11:14,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(411): Completed pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:14,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-24 18:11:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-24 18:11:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,37467,1690222246245, splitWal=true, meta=false 2023-07-24 18:11:14,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,35913,1690222239741, splitWal=true, meta=false 2023-07-24 18:11:14,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=117, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43449,1690222239527, splitWal=true, meta=false 2023-07-24 18:11:14,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41915,1690222243305, splitWal=true, meta=true 2023-07-24 18:11:14,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=125, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-24 18:11:14,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 22 msec 2023-07-24 18:11:14,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:11:14,917 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-24 18:11:14,918 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,37389,1690222261512, table=hbase:meta, region=1588230740 2023-07-24 18:11:14,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
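[editor note] The long "Completed pid=..." dump above is the new master reloading its procedure store: every table, namespace and snapshot operation issued by the earlier test methods is still recorded there, plus one ROLLEDBACK CreateTableProcedure from the no-online-servers case. Procedures of this kind are scheduled by ordinary Admin calls; a minimal client-side sketch that would produce the same Create/Disable/Delete chain is shown below (the namespace and table names are invented, not the ones in the log). The blocking Admin variants return only after the corresponding master procedure finishes, which is why the store ends up with one completed entry per step.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class TableLifecycle {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // CreateNamespaceProcedure
          admin.createNamespace(NamespaceDescriptor.create("Group_example").build());

          // CreateTableProcedure
          TableName tn = TableName.valueOf("Group_example:demo");
          admin.createTable(TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());

          // DisableTableProcedure, then DeleteTableProcedure
          admin.disableTable(tn);
          admin.deleteTable(tn);

          // DeleteNamespaceProcedure
          admin.deleteNamespace("Group_example");
        }
      }
    }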
2023-07-24 18:11:14,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37389,1690222261512 already deleted, retry=false 2023-07-24 18:11:14,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,37389,1690222261512 on jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,926 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=128, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,37389,1690222261512, splitWal=true, meta=true 2023-07-24 18:11:14,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=128 for jenkins-hbase4.apache.org,37389,1690222261512 (carryingMeta=true) jenkins-hbase4.apache.org,37389,1690222261512/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@11cf1f62[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:11:14,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35553,1690222261840 already deleted, retry=false 2023-07-24 18:11:14,928 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,35553,1690222261840 on jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,35553,1690222261840, splitWal=true, meta=false 2023-07-24 18:11:14,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=129 for jenkins-hbase4.apache.org,35553,1690222261840 (carryingMeta=false) jenkins-hbase4.apache.org,35553,1690222261840/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4ce525c8[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:11:14,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35775,1690222261683 already deleted, retry=false 2023-07-24 18:11:14,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,35775,1690222261683 on jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=130, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,35775,1690222261683, splitWal=true, meta=false 2023-07-24 18:11:14,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=130 for jenkins-hbase4.apache.org,35775,1690222261683 (carryingMeta=false) jenkins-hbase4.apache.org,35775,1690222261683/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@5896f431[Write locks = 1, Read locks = 0], oldState=ONLINE. 
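[editor note] At this point the master has queued ServerCrashProcedures (pid=128 to 130) for the three region servers of the previous cluster incarnation, one of them carrying hbase:meta. Tests that restart a mini-cluster often wait for these to drain before asserting anything; the helper below is one possible way to do that with HBaseTestingUtility, assuming the usual 2.x master API (getProcedures(), Procedure.isFinished()) rather than anything this particular test is known to do.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.master.HMaster;
    import org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure;

    public final class CrashProcedureWait {
      // Polls the active master until no unfinished ServerCrashProcedure remains.
      static void waitForCrashProcedures(HBaseTestingUtility util) throws Exception {
        HMaster master = util.getMiniHBaseCluster().getMaster();
        util.waitFor(60_000, () -> master.getProcedures().stream()
            .noneMatch(p -> p instanceof ServerCrashProcedure && !p.isFinished()));
      }
    }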
2023-07-24 18:11:14,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-24 18:11:14,932 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:11:14,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:11:14,933 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:11:14,933 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:11:14,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:11:14,936 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:14,936 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:14,936 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:14,936 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:14,936 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:14,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33035,1690222274007, sessionid=0x101988716b4001c, setting cluster-up flag (Was=false) 2023-07-24 18:11:14,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:11:14,945 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:11:14,949 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:14,949 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/.hbase-snapshot/.tmp 2023-07-24 18:11:14,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:11:14,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:11:14,952 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-24 18:11:14,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:11:14,955 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:14,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-24 18:11:14,960 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:14,961 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:37389 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:37389 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:14,962 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:37389 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:37389 2023-07-24 18:11:14,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:14,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:11:14,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:14,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
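[editor note] Alongside the balancer setup, the entries above show the rsgroup machinery coming back: the RSGroupAdminService coprocessor endpoint is registered, RSGroupInfoManagerImpl refreshes in offline mode from the two group records it finds in ZooKeeper, and the first attempt to reach hbase:rsgroup gets a connection refused because the old region server at 37389 is gone, so the startup worker will keep retrying. For orientation only, group membership served by that endpoint is normally manipulated through the hbase-rsgroup client; the sketch below assumes the RSGroupAdminClient API of that module (method names taken as an assumption about the 2.x module) and reuses the group name "appInfo" seen in the log with an illustrative host and port.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.addRSGroup("appInfo");              // group name appears in the log above
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41163)),
              "appInfo");                            // host/port chosen for illustration
          RSGroupInfo info = groups.getRSGroupInfo("appInfo");
          System.out.println("servers in appInfo: " + info.getServers());
        }
      }
    }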
2023-07-24 18:11:14,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:14,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:14,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:14,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:14,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:11:14,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:14,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:14,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:14,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690222304979 2023-07-24 18:11:14,979 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:11:14,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:11:14,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:11:14,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:11:14,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:11:14,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:11:14,985 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
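[editor note] The cleaner section above lists the WAL cleaner chain the master installs (TimeToLiveLogCleaner, ReplicationLogCleaner, TimeToLiveMasterLocalStoreWALCleaner, TimeToLiveProcedureWALCleaner) and schedules the LogsCleaner chore every 600000 ms. That chain is driven by an ordinary configuration key; the fragment below just restates the logged classes through the Configuration API as an illustration, it is not something the test sets explicitly.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CleanerChains {
      public static Configuration cleanerConf() {
        Configuration conf = HBaseConfiguration.create();
        // WAL cleaner plugin chain, mirroring the "log_cleaner" initializations above.
        conf.set("hbase.master.logcleaner.plugins",
            "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner,"
            + "org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner");
        return conf;
      }
    }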
2023-07-24 18:11:14,986 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37389,1690222261512; numProcessing=1 2023-07-24 18:11:14,986 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35775,1690222261683; numProcessing=2 2023-07-24 18:11:14,986 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=128, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37389,1690222261512, splitWal=true, meta=true 2023-07-24 18:11:14,986 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=130, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35775,1690222261683, splitWal=true, meta=false 2023-07-24 18:11:14,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:11:14,986 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35553,1690222261840; numProcessing=3 2023-07-24 18:11:14,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:11:14,986 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=129, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35553,1690222261840, splitWal=true, meta=false 2023-07-24 18:11:14,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:11:14,987 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=128, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37389,1690222261512, splitWal=true, meta=true, isMeta: true 2023-07-24 18:11:14,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:11:14,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:11:14,989 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512-splitting 2023-07-24 18:11:14,990 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512-splitting dir is empty, no logs to split. 2023-07-24 18:11:14,990 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,37389,1690222261512 WAL count=0, meta=true 2023-07-24 18:11:14,992 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512-splitting dir is empty, no logs to split. 2023-07-24 18:11:14,992 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,37389,1690222261512 WAL count=0, meta=true 2023-07-24 18:11:14,992 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,37389,1690222261512 WAL splitting is done? 
wals=0, meta=true 2023-07-24 18:11:14,993 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:11:14,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222274988,5,FailOnTimeoutGroup] 2023-07-24 18:11:14,995 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:11:14,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222274995,5,FailOnTimeoutGroup] 2023-07-24 18:11:14,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:14,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 18:11:14,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:14,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:14,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222274996, completionTime=-1 2023-07-24 18:11:14,996 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 
2023-07-24 18:11:14,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 18:11:14,996 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:15,033 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:15,034 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:15,034 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:15,034 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:15,034 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:15,034 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:15,036 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:15,036 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:15,036 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:15,036 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:15,037 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:15,037 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:15,040 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:15,041 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ReadOnlyZKClient(139): Connect 0x32c5a4cc to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:15,041 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:15,041 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:15,044 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ReadOnlyZKClient(139): Connect 0x07b0f986 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:15,044 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ReadOnlyZKClient(139): Connect 0x0e8c6183 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:15,053 DEBUG [RS:1;jenkins-hbase4:46835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c024ce4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:15,053 DEBUG [RS:1;jenkins-hbase4:46835] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@383dcfbc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:15,055 DEBUG [RS:2;jenkins-hbase4:41941] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b9e22, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:15,055 DEBUG [RS:2;jenkins-hbase4:41941] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4740af3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:15,060 DEBUG [RS:0;jenkins-hbase4:41163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@9771700, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:15,060 DEBUG [RS:0;jenkins-hbase4:41163] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@d4892f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:15,064 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41941 2023-07-24 18:11:15,064 INFO [RS:2;jenkins-hbase4:41941] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:15,064 INFO [RS:2;jenkins-hbase4:41941] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:15,064 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:15,064 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46835 2023-07-24 18:11:15,065 INFO [RS:1;jenkins-hbase4:46835] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:15,065 INFO [RS:1;jenkins-hbase4:46835] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:15,065 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-24 18:11:15,065 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33035,1690222274007 with isa=jenkins-hbase4.apache.org/172.31.14.131:41941, startcode=1690222274544 2023-07-24 18:11:15,065 DEBUG [RS:2;jenkins-hbase4:41941] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:15,065 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:37389 this server is in the failed servers list 2023-07-24 18:11:15,065 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33035,1690222274007 with isa=jenkins-hbase4.apache.org/172.31.14.131:46835, startcode=1690222274357 2023-07-24 18:11:15,065 DEBUG [RS:1;jenkins-hbase4:46835] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:15,067 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42339, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:15,067 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38441, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:15,068 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,069 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:15,070 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:11:15,070 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,070 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:15,070 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:11:15,070 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:15,070 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:15,070 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40867 2023-07-24 18:11:15,071 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:15,071 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:15,071 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40867 2023-07-24 18:11:15,072 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:15,073 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,073 WARN [RS:2;jenkins-hbase4:41941] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:15,073 INFO [RS:2;jenkins-hbase4:41941] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:15,073 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,073 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,073 WARN [RS:1;jenkins-hbase4:46835] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:11:15,073 INFO [RS:1;jenkins-hbase4:46835] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:15,074 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,075 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41163 2023-07-24 18:11:15,075 INFO [RS:0;jenkins-hbase4:41163] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:15,075 INFO [RS:0;jenkins-hbase4:41163] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:15,075 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:15,076 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33035,1690222274007 with isa=jenkins-hbase4.apache.org/172.31.14.131:41163, startcode=1690222274180 2023-07-24 18:11:15,076 DEBUG [RS:0;jenkins-hbase4:41163] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:15,078 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:15,078 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,078 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:15,078 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:11:15,079 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:15,079 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:15,079 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40867 2023-07-24 18:11:15,082 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,082 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41941,1690222274544] 2023-07-24 18:11:15,082 WARN [RS:0;jenkins-hbase4:41163] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:11:15,082 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46835,1690222274357] 2023-07-24 18:11:15,083 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41163,1690222274180] 2023-07-24 18:11:15,083 INFO [RS:0;jenkins-hbase4:41163] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:15,084 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,086 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,086 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,087 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,087 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,087 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,087 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,088 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:15,088 DEBUG [RS:2;jenkins-hbase4:41941] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:15,088 INFO [RS:1;jenkins-hbase4:46835] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:15,088 INFO [RS:2;jenkins-hbase4:41941] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:15,095 INFO [RS:2;jenkins-hbase4:41941] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:15,095 INFO [RS:1;jenkins-hbase4:46835] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:15,095 INFO [RS:2;jenkins-hbase4:41941] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:15,095 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore 
name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,096 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=100ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-24 18:11:15,099 INFO [RS:1;jenkins-hbase4:46835] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:15,099 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,101 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:15,103 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:15,104 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,104 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,104 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,104 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 
2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:2;jenkins-hbase4:41941] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,105 DEBUG [RS:1;jenkins-hbase4:46835] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,106 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,106 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,106 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,111 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,111 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,111 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,115 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,116 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:15,116 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,116 DEBUG [RS:0;jenkins-hbase4:41163] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:15,117 INFO [RS:0;jenkins-hbase4:41163] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:15,122 INFO [RS:0;jenkins-hbase4:41163] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:15,123 INFO [RS:0;jenkins-hbase4:41163] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:15,123 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,126 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:15,128 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,128 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,128 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,128 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,128 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,128 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,129 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:15,129 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,129 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,129 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,129 DEBUG [RS:0;jenkins-hbase4:41163] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:15,132 INFO [RS:2;jenkins-hbase4:41941] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:15,132 INFO 
[RS:2;jenkins-hbase4:41941] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41941,1690222274544-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,134 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,134 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,134 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,134 INFO [RS:1;jenkins-hbase4:46835] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:15,134 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46835,1690222274357-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:15,146 DEBUG [jenkins-hbase4:33035] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:15,147 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:15,147 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:15,147 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:15,147 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:15,147 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:15,149 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46835,1690222274357, state=OPENING 2023-07-24 18:11:15,151 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:11:15,151 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:15,151 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=131, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46835,1690222274357}] 2023-07-24 18:11:15,152 INFO [RS:0;jenkins-hbase4:41163] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:15,153 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41163,1690222274180-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:15,153 INFO [RS:2;jenkins-hbase4:41941] regionserver.Replication(203): jenkins-hbase4.apache.org,41941,1690222274544 started 2023-07-24 18:11:15,154 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41941,1690222274544, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41941, sessionid=0x101988716b4001f 2023-07-24 18:11:15,154 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:15,154 DEBUG [RS:2;jenkins-hbase4:41941] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,154 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41941,1690222274544' 2023-07-24 18:11:15,154 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41941,1690222274544' 2023-07-24 18:11:15,155 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:15,156 DEBUG [RS:2;jenkins-hbase4:41941] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:15,156 DEBUG [RS:2;jenkins-hbase4:41941] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:15,156 INFO [RS:2;jenkins-hbase4:41941] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:15,156 INFO [RS:2;jenkins-hbase4:41941] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:15,162 INFO [RS:1;jenkins-hbase4:46835] regionserver.Replication(203): jenkins-hbase4.apache.org,46835,1690222274357 started 2023-07-24 18:11:15,163 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46835,1690222274357, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46835, sessionid=0x101988716b4001e 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46835,1690222274357' 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:15,163 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:15,164 DEBUG [RS:1;jenkins-hbase4:46835] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,164 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46835,1690222274357' 2023-07-24 18:11:15,164 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:15,164 DEBUG [RS:1;jenkins-hbase4:46835] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:15,164 DEBUG [RS:1;jenkins-hbase4:46835] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:15,164 INFO [RS:1;jenkins-hbase4:46835] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:15,164 INFO [RS:1;jenkins-hbase4:46835] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:15,167 INFO [RS:0;jenkins-hbase4:41163] regionserver.Replication(203): jenkins-hbase4.apache.org,41163,1690222274180 started 2023-07-24 18:11:15,167 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41163,1690222274180, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41163, sessionid=0x101988716b4001d 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41163,1690222274180' 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:15,168 DEBUG [RS:0;jenkins-hbase4:41163] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:15,169 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41163,1690222274180' 2023-07-24 18:11:15,169 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:15,169 DEBUG [RS:0;jenkins-hbase4:41163] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:15,169 DEBUG [RS:0;jenkins-hbase4:41163] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:15,169 INFO [RS:0;jenkins-hbase4:41163] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:15,169 INFO [RS:0;jenkins-hbase4:41163] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:15,259 INFO [RS:2;jenkins-hbase4:41941] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41941%2C1690222274544, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41941,1690222274544, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:15,267 INFO [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46835%2C1690222274357, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:15,267 WARN [ReadOnlyZKClient-127.0.0.1:59012@0x5236a4f8] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:11:15,267 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:15,269 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:15,269 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46835] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:39870 deadline: 1690222335269, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,274 INFO [RS:0;jenkins-hbase4:41163] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41163%2C1690222274180, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41163,1690222274180, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:15,276 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:15,277 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:15,278 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:15,284 INFO [RS:2;jenkins-hbase4:41941] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41941,1690222274544/jenkins-hbase4.apache.org%2C41941%2C1690222274544.1690222275259 2023-07-24 18:11:15,284 DEBUG [RS:2;jenkins-hbase4:41941] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:11:15,296 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:15,296 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:15,296 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:15,304 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:15,304 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:15,305 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:15,311 INFO [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357/jenkins-hbase4.apache.org%2C46835%2C1690222274357.1690222275267 2023-07-24 18:11:15,315 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:15,315 INFO [RS:0;jenkins-hbase4:41163] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,41163,1690222274180/jenkins-hbase4.apache.org%2C41163%2C1690222274180.1690222275274 2023-07-24 18:11:15,317 DEBUG [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK]] 2023-07-24 18:11:15,320 DEBUG [RS:0;jenkins-hbase4:41163] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:11:15,320 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:15,321 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39876, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:15,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:11:15,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:15,327 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46835%2C1690222274357.meta, suffix=.meta, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:15,344 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:15,344 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:15,344 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:15,351 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357/jenkins-hbase4.apache.org%2C46835%2C1690222274357.meta.1690222275328.meta 2023-07-24 18:11:15,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK]] 2023-07-24 18:11:15,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:15,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:15,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:11:15,354 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 18:11:15,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:11:15,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:15,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:11:15,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:11:15,358 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:11:15,359 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:11:15,359 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info 2023-07-24 18:11:15,359 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:11:15,368 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d 2023-07-24 18:11:15,376 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:11:15,385 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:11:15,385 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:11:15,386 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 18:11:15,386 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:11:15,387 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:15,387 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:15,387 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:11:15,394 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:11:15,394 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/rep_barrier/3a5e22ad1da244f1a956859232c6e5f1 2023-07-24 18:11:15,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:15,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:11:15,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:11:15,396 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table 2023-07-24 18:11:15,396 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:11:15,404 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650 2023-07-24 18:11:15,413 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:11:15,422 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fde3e8b12951484eaef87586119cf207 2023-07-24 18:11:15,422 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207 2023-07-24 18:11:15,422 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:15,423 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:11:15,425 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740 2023-07-24 18:11:15,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 18:11:15,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:11:15,430 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=167; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129882080, jitterRate=0.03655104339122772}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:11:15,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:11:15,433 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=132, masterSystemTime=1690222275315 2023-07-24 18:11:15,438 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 18:11:15,440 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=1), splitQueue=0 2023-07-24 18:11:15,441 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-24 18:11:15,441 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-24 18:11:15,446 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16769 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-24 18:11:15,456 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:11:15,456 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HStore(1912): 1588230740/table is initiating minor compaction (all files) 2023-07-24 18:11:15,457 INFO [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/table in hbase:meta,,1.1588230740 2023-07-24 18:11:15,457 INFO [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650] into tmpdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp, totalSize=16.4 K 2023-07-24 18:11:15,457 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46835,1690222274357, state=OPEN 2023-07-24 18:11:15,457 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 
2023-07-24 18:11:15,458 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 26100 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-24 18:11:15,459 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-24 18:11:15,459 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-24 18:11:15,459 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] compactions.Compactor(207): Compacting ed4eee4aebd4497b91a21f8f303e8b08, keycount=4, bloomtype=NONE, size=4.8 K, encoding=NONE, compression=NONE, seqNum=15, earliestPutTs=1690222242452 2023-07-24 18:11:15,459 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d] into tmpdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp, totalSize=25.5 K 2023-07-24 18:11:15,459 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:11:15,460 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:15,460 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] compactions.Compactor(207): Compacting fde3e8b12951484eaef87586119cf207, keycount=23, bloomtype=NONE, size=7.0 K, encoding=NONE, compression=NONE, seqNum=148, earliestPutTs=9223372036854775807 2023-07-24 18:11:15,460 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.Compactor(207): Compacting b7efcf27a4234e8cb81fe70d74c707cd, keycount=21, bloomtype=NONE, size=7.1 K, encoding=NONE, compression=NONE, seqNum=15, earliestPutTs=1690222242410 2023-07-24 18:11:15,461 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] compactions.Compactor(207): Compacting 230c7749dda64496b1ef6916ca5f4650, keycount=2, bloomtype=NONE, size=4.7 K, encoding=NONE, compression=NONE, seqNum=163, earliestPutTs=1690222267909 2023-07-24 18:11:15,461 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.Compactor(207): Compacting f7f4dbb0133a4183b89b4fe6e9566541, keycount=53, bloomtype=NONE, size=10.7 K, encoding=NONE, compression=NONE, seqNum=148, earliestPutTs=1690222244895 2023-07-24 18:11:15,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=131 2023-07-24 18:11:15,462 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.Compactor(207): Compacting a79e17af74e44f32952a7d071379d76d, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=163, earliestPutTs=1690222266866 2023-07-24 18:11:15,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished 
pid=132, ppid=131, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46835,1690222274357 in 309 msec 2023-07-24 18:11:15,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=128 2023-07-24 18:11:15,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=128, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 469 msec 2023-07-24 18:11:15,483 INFO [RS:1;jenkins-hbase4:46835-longCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#table#compaction#14 average throughput is 0.26 MB/second, slept 0 time(s) and total slept time is 0 ms. 1 active operations remaining, total limit is 50.00 MB/second 2023-07-24 18:11:15,484 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#15 average throughput is 2.59 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-24 18:11:15,512 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/2447638a91794b7dbfc6b1f0d46ad970 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/2447638a91794b7dbfc6b1f0d46ad970 2023-07-24 18:11:15,512 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/table/05e0ba7f732c448ea955114a2f3586fa as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/05e0ba7f732c448ea955114a2f3586fa 2023-07-24 18:11:15,533 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:11:15,533 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:11:15,537 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 2447638a91794b7dbfc6b1f0d46ad970(size=10.1 K), total size for store is 10.1 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-24 18:11:15,537 INFO [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/table of 1588230740 into 05e0ba7f732c448ea955114a2f3586fa(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-24 18:11:15,537 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-24 18:11:15,537 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-24 18:11:15,537 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1690222275436; duration=0sec 2023-07-24 18:11:15,537 INFO [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/table, priority=13, startTime=1690222275440; duration=0sec 2023-07-24 18:11:15,538 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 18:11:15,538 DEBUG [RS:1;jenkins-hbase4:46835-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 18:11:15,583 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:15,583 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:35553 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:15,584 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:35553 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 2023-07-24 18:11:15,688 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:35553 this server is in the failed servers list 2023-07-24 18:11:15,894 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:35553 this server is in the failed servers list 2023-07-24 18:11:16,200 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:35553 this server is in the failed servers list 2023-07-24 18:11:16,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1611ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1511ms 2023-07-24 18:11:16,704 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:35553 this server is in the failed servers list 2023-07-24 18:11:17,121 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-24 18:11:17,712 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 18:11:17,718 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:35553 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:17,720 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:35553 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 2023-07-24 18:11:18,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3121ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3021ms 2023-07-24 18:11:19,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4524ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-24 18:11:19,520 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 18:11:19,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,37389,1690222261512, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512, openSeqNum=21 2023-07-24 18:11:19,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=785da8c92abeb2f759b91756349c6ee1, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,37389,1690222261512, regionLocation=jenkins-hbase4.apache.org,37389,1690222261512, openSeqNum=2 2023-07-24 18:11:19,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,35553,1690222261840, regionLocation=jenkins-hbase4.apache.org,35553,1690222261840, openSeqNum=77 2023-07-24 18:11:19,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:11:19,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222339523 2023-07-24 18:11:19,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222399523 2023-07-24 18:11:19,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-24 18:11:19,539 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,37389,1690222261512 had 3 regions 2023-07-24 18:11:19,539 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,35553,1690222261840 had 1 regions 2023-07-24 18:11:19,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33035,1690222274007-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:19,539 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,35775,1690222261683 had 0 regions 2023-07-24 18:11:19,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33035,1690222274007-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:19,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33035,1690222274007-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:19,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33035, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:19,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:19,539 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
is NOT online; state={b3e0fb36cbe9750f5f2b47d078547932 state=OPEN, ts=1690222279523, server=jenkins-hbase4.apache.org,37389,1690222261512}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-24 18:11:19,540 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=129, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35553,1690222261840, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:19,540 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=130, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,35775,1690222261683, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:19,541 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=128, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,37389,1690222261512, splitWal=true, meta=true, isMeta: false 2023-07-24 18:11:19,542 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840-splitting 2023-07-24 18:11:19,543 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,37389,1690222261512/hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., unknown_server=jenkins-hbase4.apache.org,37389,1690222261512/hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1., unknown_server=jenkins-hbase4.apache.org,35553,1690222261840/hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:19,544 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840-splitting dir is empty, no logs to split. 2023-07-24 18:11:19,544 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,35553,1690222261840 WAL count=0, meta=false 2023-07-24 18:11:19,544 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683-splitting 2023-07-24 18:11:19,545 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683-splitting dir is empty, no logs to split. 2023-07-24 18:11:19,545 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,35775,1690222261683 WAL count=0, meta=false 2023-07-24 18:11:19,546 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512-splitting dir is empty, no logs to split. 2023-07-24 18:11:19,546 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,37389,1690222261512 WAL count=0, meta=false 2023-07-24 18:11:19,547 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840-splitting dir is empty, no logs to split. 
2023-07-24 18:11:19,547 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,35553,1690222261840 WAL count=0, meta=false 2023-07-24 18:11:19,547 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,35553,1690222261840 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:19,548 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683-splitting dir is empty, no logs to split. 2023-07-24 18:11:19,548 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,35775,1690222261683 WAL count=0, meta=false 2023-07-24 18:11:19,548 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,35775,1690222261683 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:19,550 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,35553,1690222261840 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35553,1690222261840-splitting does not exist. 2023-07-24 18:11:19,551 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN}] 2023-07-24 18:11:19,551 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,35775,1690222261683 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,35775,1690222261683-splitting does not exist. 2023-07-24 18:11:19,551 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,37389,1690222261512-splitting dir is empty, no logs to split. 2023-07-24 18:11:19,551 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,37389,1690222261512 WAL count=0, meta=false 2023-07-24 18:11:19,551 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,37389,1690222261512 WAL splitting is done? 
wals=0, meta=false 2023-07-24 18:11:19,552 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN 2023-07-24 18:11:19,552 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,35775,1690222261683 after splitting done 2023-07-24 18:11:19,552 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,35775,1690222261683 from processing; numProcessing=2 2023-07-24 18:11:19,552 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN}, {pid=135, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN}] 2023-07-24 18:11:19,552 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:19,553 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN 2023-07-24 18:11:19,554 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN 2023-07-24 18:11:19,554 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,35775,1690222261683, splitWal=true, meta=false in 4.6220 sec 2023-07-24 18:11:19,554 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:19,554 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-24 18:11:19,554 DEBUG [jenkins-hbase4:33035] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:19,555 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:19,555 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:19,555 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:19,555 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 
2023-07-24 18:11:19,555 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-24 18:11:19,558 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:19,558 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:19,558 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222279557"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222279557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222279557"}]},"ts":"1690222279557"} 2023-07-24 18:11:19,558 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222279557"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222279557"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222279557"}]},"ts":"1690222279557"} 2023-07-24 18:11:19,560 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=134, state=RUNNABLE; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,41941,1690222274544}] 2023-07-24 18:11:19,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=133, state=RUNNABLE; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,46835,1690222274357}] 2023-07-24 18:11:19,707 DEBUG [jenkins-hbase4:33035] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:19,708 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:19,708 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:19,708 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:19,708 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:19,708 DEBUG [jenkins-hbase4:33035] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:19,710 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=785da8c92abeb2f759b91756349c6ee1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:19,710 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222279710"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222279710"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222279710"}]},"ts":"1690222279710"} 2023-07-24 18:11:19,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=135, state=RUNNABLE; 
OpenRegionProcedure 785da8c92abeb2f759b91756349c6ee1, server=jenkins-hbase4.apache.org,46835,1690222274357}] 2023-07-24 18:11:19,713 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:19,713 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:19,715 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:19,721 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:19,721 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:19,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:19,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,722 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,722 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:19,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f93db382913b37f9661cac1fd8ee01a9, NAME => 'hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:19,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:19,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
service=MultiRowMutationService 2023-07-24 18:11:19,723 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,723 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 18:11:19,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,723 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:19,724 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,724 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,724 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:19,725 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:19,725 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e0fb36cbe9750f5f2b47d078547932 columnFamilyName info 2023-07-24 18:11:19,726 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,727 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:11:19,727 DEBUG 
[StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m 2023-07-24 18:11:19,727 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f93db382913b37f9661cac1fd8ee01a9 columnFamilyName m 2023-07-24 18:11:19,733 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:19,733 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:19,736 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:35553 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:19,740 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:35553 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 2023-07-24 18:11:19,740 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4162 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:35553 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., hostname=jenkins-hbase4.apache.org,35553,1690222261840, seqNum=77, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:35553 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:35553 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:11:19,741 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:11:19,741 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(310): Store=b3e0fb36cbe9750f5f2b47d078547932/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:19,742 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:11:19,742 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,742 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:11:19,744 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:19,747 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 2023-07-24 18:11:19,747 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e0fb36cbe9750f5f2b47d078547932; next sequenceid=24; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10205306400, jitterRate=-0.04955677688121796}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:19,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:19,748 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., pid=136, masterSystemTime=1690222279713 2023-07-24 18:11:19,752 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:19,753 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:19,753 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, openSeqNum=24, regionLocation=jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:19,754 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222279753"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222279753"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222279753"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222279753"}]},"ts":"1690222279753"} 2023-07-24 18:11:19,756 DEBUG [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:11:19,756 INFO [StoreOpener-f93db382913b37f9661cac1fd8ee01a9-1] regionserver.HStore(310): Store=f93db382913b37f9661cac1fd8ee01a9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:19,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=134 2023-07-24 18:11:19,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=134, state=SUCCESS; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,41941,1690222274544 in 195 msec 2023-07-24 18:11:19,757 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,758 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=128, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, ASSIGN in 205 msec 2023-07-24 18:11:19,758 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,761 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:19,762 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f93db382913b37f9661cac1fd8ee01a9; next sequenceid=84; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@9410285, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:19,762 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:11:19,763 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., pid=137, masterSystemTime=1690222279713 2023-07-24 18:11:19,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; 
Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 18:11:19,764 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-24 18:11:19,766 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16056 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-24 18:11:19,766 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1912): f93db382913b37f9661cac1fd8ee01a9/m is initiating minor compaction (all files) 2023-07-24 18:11:19,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:19,766 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of f93db382913b37f9661cac1fd8ee01a9/m in hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:19,766 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:19,766 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=f93db382913b37f9661cac1fd8ee01a9, regionState=OPEN, openSeqNum=84, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:19,766 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6] into tmpdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp, totalSize=15.7 K 2023-07-24 18:11:19,766 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222279766"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222279766"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222279766"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222279766"}]},"ts":"1690222279766"} 2023-07-24 18:11:19,767 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.Compactor(207): Compacting d5cd966a907b4e6e86b91fb7d6889add, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1690222243696 2023-07-24 18:11:19,767 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] compactions.Compactor(207): Compacting c5e564844d934f86b57f8f0aadc04422, keycount=21, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=73, earliestPutTs=1690222258221 2023-07-24 18:11:19,768 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] 
compactions.Compactor(207): Compacting d08a5ba50b5c4cb6b3b0378bbcc621b6, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1690222271093 2023-07-24 18:11:19,770 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=133 2023-07-24 18:11:19,770 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=133, state=SUCCESS; OpenRegionProcedure f93db382913b37f9661cac1fd8ee01a9, server=jenkins-hbase4.apache.org,46835,1690222274357 in 207 msec 2023-07-24 18:11:19,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=129 2023-07-24 18:11:19,772 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,35553,1690222261840 after splitting done 2023-07-24 18:11:19,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f93db382913b37f9661cac1fd8ee01a9, ASSIGN in 219 msec 2023-07-24 18:11:19,772 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,35553,1690222261840 from processing; numProcessing=1 2023-07-24 18:11:19,773 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,35553,1690222261840, splitWal=true, meta=false in 4.8440 sec 2023-07-24 18:11:19,779 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] throttle.PressureAwareThroughputController(145): f93db382913b37f9661cac1fd8ee01a9#m#compaction#16 average throughput is 0.23 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-24 18:11:19,793 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/61bee1c1553e406d88f012560449f1ea as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/61bee1c1553e406d88f012560449f1ea 2023-07-24 18:11:19,799 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:11:19,799 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in f93db382913b37f9661cac1fd8ee01a9/m of f93db382913b37f9661cac1fd8ee01a9 into 61bee1c1553e406d88f012560449f1ea(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-24 18:11:19,799 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:11:19,799 INFO [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., storeName=f93db382913b37f9661cac1fd8ee01a9/m, priority=13, startTime=1690222279763; duration=0sec 2023-07-24 18:11:19,800 DEBUG [RS:1;jenkins-hbase4:46835-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-24 18:11:19,870 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:19,870 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 785da8c92abeb2f759b91756349c6ee1, NAME => 'hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:19,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:19,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,871 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,873 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,875 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/q 2023-07-24 18:11:19,875 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/q 2023-07-24 18:11:19,876 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, 
compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 785da8c92abeb2f759b91756349c6ee1 columnFamilyName q 2023-07-24 18:11:19,876 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(310): Store=785da8c92abeb2f759b91756349c6ee1/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:19,876 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,878 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/u 2023-07-24 18:11:19,878 DEBUG [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/u 2023-07-24 18:11:19,878 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 785da8c92abeb2f759b91756349c6ee1 columnFamilyName u 2023-07-24 18:11:19,880 INFO [StoreOpener-785da8c92abeb2f759b91756349c6ee1-1] regionserver.HStore(310): Store=785da8c92abeb2f759b91756349c6ee1/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:19,881 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,887 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-24 18:11:19,889 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:19,890 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 785da8c92abeb2f759b91756349c6ee1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9623445920, jitterRate=-0.10374675691127777}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 18:11:19,890 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 785da8c92abeb2f759b91756349c6ee1: 2023-07-24 18:11:19,890 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1., pid=138, masterSystemTime=1690222279864 2023-07-24 18:11:19,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:19,899 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:19,903 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=785da8c92abeb2f759b91756349c6ee1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:19,903 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222279903"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222279903"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222279903"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222279903"}]},"ts":"1690222279903"} 2023-07-24 18:11:19,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=135 2023-07-24 18:11:19,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; OpenRegionProcedure 785da8c92abeb2f759b91756349c6ee1, server=jenkins-hbase4.apache.org,46835,1690222274357 in 194 msec 2023-07-24 18:11:19,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=128 2023-07-24 18:11:19,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=128, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=785da8c92abeb2f759b91756349c6ee1, ASSIGN in 355 msec 2023-07-24 18:11:19,909 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,37389,1690222261512 after splitting done 2023-07-24 18:11:19,910 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,37389,1690222261512 from processing; numProcessing=0 2023-07-24 18:11:19,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,37389,1690222261512, splitWal=true, meta=true in 4.9880 sec 2023-07-24 18:11:20,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): 
master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-24 18:11:20,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:20,548 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55572, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:20,567 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:11:20,570 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.816sec 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33035,1690222274007-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:11:20,571 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33035,1690222274007-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-24 18:11:20,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:11:20,641 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x4e492031 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:20,647 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62a75351, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:20,648 DEBUG [hconnection-0x707a866c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:20,651 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:20,654 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-24 18:11:20,655 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4e492031 to 127.0.0.1:59012 2023-07-24 18:11:20,655 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:20,656 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:33035 after: jenkins-hbase4.apache.org:33035 2023-07-24 18:11:20,656 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x481b064b to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:20,661 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cf6a0c2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:20,661 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:20,860 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:11:21,088 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-24 18:11:21,089 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:11:21,162 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-24 18:11:23,749 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:11:23,749 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 18:11:23,756 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:23,757 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:23,757 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:23,759 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-24 18:11:23,759 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:11:23,765 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:11:23,767 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58434, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:11:23,769 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:11:23,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:11:23,770 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(139): Connect 0x735d0165 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:23,778 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@14d2f552, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:23,778 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:23,782 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:23,783 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101988716b40027 connected 2023-07-24 18:11:23,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:23,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 
18:11:23,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:23,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:11:23,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:23,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:23,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:23,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:23,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:23,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:23,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:23,798 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:11:23,811 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:23,811 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:23,812 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34431 2023-07-24 18:11:23,813 INFO [Listener at localhost/44627] 
hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:23,815 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:23,815 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:23,816 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:23,817 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34431 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:23,824 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:344310x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:23,826 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:344310x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:23,826 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34431-0x101988716b40028 connected 2023-07-24 18:11:23,827 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:11:23,827 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:23,836 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34431 2023-07-24 18:11:23,836 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34431 2023-07-24 18:11:23,837 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34431 2023-07-24 18:11:23,837 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34431 2023-07-24 18:11:23,837 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34431 2023-07-24 18:11:23,839 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:23,839 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:23,839 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:23,840 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:23,840 INFO [Listener at 
localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:23,840 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:23,840 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:23,840 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 36947 2023-07-24 18:11:23,841 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:23,847 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:23,847 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4751acc8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:23,847 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:23,847 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6434ed08{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:23,979 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:23,980 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:23,980 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:23,980 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:23,981 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:23,982 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a014d72{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-36947-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3403329263564153728/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:23,983 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@7b371ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:36947} 2023-07-24 18:11:23,983 INFO [Listener at localhost/44627] server.Server(415): Started @52203ms 2023-07-24 18:11:23,985 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:23,986 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(43): Procedure 
flush-table-proc initializing 2023-07-24 18:11:23,987 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:23,987 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:23,989 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:23,993 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ReadOnlyZKClient(139): Connect 0x6a4952bc to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:23,996 DEBUG [RS:3;jenkins-hbase4:34431] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@648eb185, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:23,996 DEBUG [RS:3;jenkins-hbase4:34431] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60b139cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:24,004 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34431 2023-07-24 18:11:24,004 INFO [RS:3;jenkins-hbase4:34431] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:24,004 INFO [RS:3;jenkins-hbase4:34431] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:24,004 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:24,005 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33035,1690222274007 with isa=jenkins-hbase4.apache.org/172.31.14.131:34431, startcode=1690222283811 2023-07-24 18:11:24,005 DEBUG [RS:3;jenkins-hbase4:34431] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:24,007 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54749, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:24,007 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,007 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:24,007 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:24,007 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:24,007 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40867 2023-07-24 18:11:24,010 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:24,010 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:24,010 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,010 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:24,010 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:24,010 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,010 WARN [RS:3;jenkins-hbase4:34431] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:11:24,010 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:11:24,010 INFO [RS:3;jenkins-hbase4:34431] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:24,010 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,010 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34431,1690222283811] 2023-07-24 18:11:24,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,013 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,013 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:11:24,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:24,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:24,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:24,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:24,015 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:24,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:24,015 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,016 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,016 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:24,016 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ZKUtil(162): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:24,017 DEBUG [RS:3;jenkins-hbase4:34431] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:24,017 INFO [RS:3;jenkins-hbase4:34431] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:24,022 INFO [RS:3;jenkins-hbase4:34431] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:24,023 INFO [RS:3;jenkins-hbase4:34431] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:24,023 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:24,023 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:24,025 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:24,025 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,025 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,025 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,026 DEBUG [RS:3;jenkins-hbase4:34431] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:24,030 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:24,030 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:24,031 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:24,041 INFO [RS:3;jenkins-hbase4:34431] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:24,042 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34431,1690222283811-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:24,052 INFO [RS:3;jenkins-hbase4:34431] regionserver.Replication(203): jenkins-hbase4.apache.org,34431,1690222283811 started 2023-07-24 18:11:24,052 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34431,1690222283811, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34431, sessionid=0x101988716b40028 2023-07-24 18:11:24,052 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:24,052 DEBUG [RS:3;jenkins-hbase4:34431] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,052 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34431,1690222283811' 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34431,1690222283811' 2023-07-24 18:11:24,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:24,053 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:24,054 DEBUG [RS:3;jenkins-hbase4:34431] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:24,054 DEBUG [RS:3;jenkins-hbase4:34431] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:24,054 INFO [RS:3;jenkins-hbase4:34431] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:24,054 INFO [RS:3;jenkins-hbase4:34431] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:24,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:24,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:24,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:24,063 DEBUG [hconnection-0x46b90dd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:24,064 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52454, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:24,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33035] to rsgroup master 2023-07-24 18:11:24,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:24,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58434 deadline: 1690223484073, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 
2023-07-24 18:11:24,074 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:11:24,075 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:24,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,076 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34431, jenkins-hbase4.apache.org:41163, jenkins-hbase4.apache.org:41941, jenkins-hbase4.apache.org:46835], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:24,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:24,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:24,120 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=549 (was 516) Potentially hanging thread: qtp1473520438-1718 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x07b0f986-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1309094686-1789 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x031a14e0-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1072759313-2049 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x5236a4f8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp130877896-1686-acceptor-0@7ae065d-ServerConnector@263442ac{HTTP/1.1, (http/1.1)}{0.0.0.0:40867} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34431 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (449330859) connection to localhost/127.0.0.1:44619 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x735d0165 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1780 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x031a14e0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222274988 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp130877896-1687 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-4a47d109-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1707344554_17 at /127.0.0.1:45072 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-69171016-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1473520438-1717-acceptor-0@2a92474a-ServerConnector@6b3653b6{HTTP/1.1, (http/1.1)}{0.0.0.0:40541} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (449330859) connection to localhost/127.0.0.1:44619 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41163Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-175331939_17 at /127.0.0.1:50696 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: qtp1309094686-1794 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7284848a-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1751 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp686407465-1776 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1072759313-2046 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6a4952bc-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp188472746-1746 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown 
Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp130877896-1691 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x031a14e0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1473520438-1720 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1752 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,41163,1690222274180 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x5236a4f8-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1707344554_17 at /127.0.0.1:38726 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:45118 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:42562 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1072759313-2051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-156797865_17 at /127.0.0.1:38758 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1707344554_17 at /127.0.0.1:42544 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1748 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:41163-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309094686-1793 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46835Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46835-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41941 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1072759313-2053 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (449330859) connection to 
localhost/127.0.0.1:44619 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:34431Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (449330859) connection to localhost/127.0.0.1:44619 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1473520438-1721 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1747-acceptor-0@41144515-ServerConnector@2563764c{HTTP/1.1, (http/1.1)}{0.0.0.0:42675} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1782 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp686407465-1778 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1309094686-1791-acceptor-0@52670717-ServerConnector@40c0c4ae{HTTP/1.1, (http/1.1)}{0.0.0.0:42957} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309094686-1787 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-36d0fa4-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1473520438-1722 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41941Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1753 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp130877896-1688 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1072759313-2048 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x735d0165-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7284848a-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,46835,1690222274357 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x481b064b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6a4952bc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309094686-1788 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1779 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46835 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41941-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34431-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1783 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309094686-1792 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x32c5a4cc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,41941,1690222274544 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1072759313-2052 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309094686-1790 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:46835-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-156797865_17 at /127.0.0.1:42564 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1473520438-1723 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: M:0;jenkins-hbase4:33035 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp130877896-1689 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1e16e330-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp130877896-1685 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1072759313-2050 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x5236a4f8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x0e8c6183-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-156797865_17 at /127.0.0.1:45110 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x07b0f986 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9-prefix:jenkins-hbase4.apache.org,46835,1690222274357.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1473520438-1719 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp130877896-1690 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1473520438-1716 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/239877532.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x07b0f986-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x481b064b-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x32c5a4cc-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x0e8c6183 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x46b90dd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1072759313-2047-acceptor-0@63688416-ServerConnector@7b371ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:36947} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x6a4952bc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:44619 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:41163 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36473,1690222261355 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x481b064b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7284848a-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x32c5a4cc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/25881740.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData-prefix:jenkins-hbase4.apache.org,33035,1690222274007 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46835 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:42566 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-287563567_17 at /127.0.0.1:38738 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-287563567_17 at /127.0.0.1:45084 [Receiving 
block BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:45098 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-45403d5-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1777-acceptor-0@5b4da0aa-ServerConnector@2259f828{HTTP/1.1, (http/1.1)}{0.0.0.0:45373} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741894_1070, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp188472746-1749 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222274995 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:38746 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33035 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7284848a-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp130877896-1692 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp686407465-1781 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100771751_17 at /127.0.0.1:38764 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41941 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41163 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-287563567_17 at /127.0.0.1:42554 [Receiving block BP-938617020-172.31.14.131-1690222233780:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp188472746-1750 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x0e8c6183-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_117981381_17 at /127.0.0.1:35242 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: PacketResponder: BP-938617020-172.31.14.131-1690222233780:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:59012@0x735d0165-SendThread(127.0.0.1:59012) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x46b90dd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34431 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=885 (was 807) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=495 (was 576), ProcessCount=175 (was 177), AvailableMemoryMB=7373 (was 5303) - AvailableMemoryMB LEAK? 
- 2023-07-24 18:11:24,122 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=549 is superior to 500 2023-07-24 18:11:24,140 INFO [Listener at localhost/44627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=549, OpenFileDescriptor=885, MaxFileDescriptor=60000, SystemLoadAverage=495, ProcessCount=175, AvailableMemoryMB=7371 2023-07-24 18:11:24,140 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=549 is superior to 500 2023-07-24 18:11:24,140 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-24 18:11:24,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:24,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:11:24,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:24,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:24,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:24,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:24,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:24,156 INFO [RS:3;jenkins-hbase4:34431] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34431%2C1690222283811, suffix=, logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811, archiveDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs, maxLogs=32 2023-07-24 18:11:24,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:24,160 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:24,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:24,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:24,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:24,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:24,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,171 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK] 2023-07-24 18:11:24,171 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK] 2023-07-24 18:11:24,171 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK] 2023-07-24 18:11:24,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,174 INFO [RS:3;jenkins-hbase4:34431] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811/jenkins-hbase4.apache.org%2C34431%2C1690222283811.1690222284156 2023-07-24 18:11:24,174 DEBUG [RS:3;jenkins-hbase4:34431] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34649,DS-f0eb5118-ccaa-4ee6-84a0-441451d228ef,DISK], DatanodeInfoWithStorage[127.0.0.1:41465,DS-49b7a722-b289-44b5-88fc-5d2eedab311e,DISK], DatanodeInfoWithStorage[127.0.0.1:43241,DS-f06a50aa-49e4-4568-bfd9-a74d73ae8350,DISK]] 2023-07-24 18:11:24,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33035] to rsgroup master 2023-07-24 18:11:24,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:24,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58434 deadline: 1690223484175, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 2023-07-24 18:11:24,176 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:24,177 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:24,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,178 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34431, jenkins-hbase4.apache.org:41163, jenkins-hbase4.apache.org:41941, jenkins-hbase4.apache.org:46835], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:24,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:24,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:24,179 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-24 18:11:24,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:24,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:24,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testClearDeadServers_1696124672 2023-07-24 18:11:24,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1696124672 2023-07-24 18:11:24,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:24,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:24,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:24,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:24,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:24,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41941, jenkins-hbase4.apache.org:41163, jenkins-hbase4.apache.org:34431] to rsgroup Group_testClearDeadServers_1696124672 2023-07-24 18:11:24,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:24,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1696124672 2023-07-24 18:11:24,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:24,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:24,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(238): Moving server region b3e0fb36cbe9750f5f2b47d078547932, which do not belong to RSGroup Group_testClearDeadServers_1696124672 2023-07-24 18:11:24,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] procedure2.ProcedureExecutor(1029): Stored pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:11:24,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 18:11:24,198 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE 2023-07-24 18:11:24,198 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:24,199 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222284198"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222284198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222284198"}]},"ts":"1690222284198"} 2023-07-24 18:11:24,203 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=139, state=RUNNABLE; CloseRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,41941,1690222274544}] 2023-07-24 18:11:24,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:11:24,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:24,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:24,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:11:24,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:24,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/26.seqid, newMaxSeqId=26, maxSeqId=23 2023-07-24 18:11:24,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:24,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:24,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b3e0fb36cbe9750f5f2b47d078547932 move to jenkins-hbase4.apache.org,46835,1690222274357 record at close sequenceid=24 2023-07-24 18:11:24,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,369 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=CLOSED 2023-07-24 18:11:24,369 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222284369"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222284369"}]},"ts":"1690222284369"} 2023-07-24 18:11:24,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=139 2023-07-24 18:11:24,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=139, state=SUCCESS; CloseRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,41941,1690222274544 in 170 msec 2023-07-24 18:11:24,372 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46835,1690222274357; forceNewPlan=false, retain=false 2023-07-24 18:11:24,522 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,523 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222284522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222284522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222284522"}]},"ts":"1690222284522"} 2023-07-24 18:11:24,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=139, state=RUNNABLE; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,46835,1690222274357}] 2023-07-24 18:11:24,678 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:24,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3e0fb36cbe9750f5f2b47d078547932, NAME => 'hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:24,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:24,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,679 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,680 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,681 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:24,681 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info 2023-07-24 18:11:24,682 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3e0fb36cbe9750f5f2b47d078547932 columnFamilyName info 2023-07-24 18:11:24,688 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:24,688 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/2628731f4d1b461e985c85e3adc2b46f 2023-07-24 18:11:24,693 DEBUG [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(539): loaded 
hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/info/99881a762fd443059bf23593fedbb752 2023-07-24 18:11:24,693 INFO [StoreOpener-b3e0fb36cbe9750f5f2b47d078547932-1] regionserver.HStore(310): Store=b3e0fb36cbe9750f5f2b47d078547932/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:24,694 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,695 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,698 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:24,699 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3e0fb36cbe9750f5f2b47d078547932; next sequenceid=27; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10226690560, jitterRate=-0.04756522178649902}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:24,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:24,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., pid=141, masterSystemTime=1690222284675 2023-07-24 18:11:24,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:24,702 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
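The pid=139 TransitRegionStateProcedure above closes b3e0fb36cbe9750f5f2b47d078547932 on jenkins-hbase4.apache.org,41941 and reopens it on jenkins-hbase4.apache.org,46835. The same reassignment can be requested directly through the Admin API; the sketch below assumes the 2.x Admin.move(byte[] encodedRegionName, ServerName) overload and reuses the server name string from this log purely for illustration.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Encoded region name as it appears in the log, and the destination server
      // in host,port,startcode form; the master runs the same close-then-open
      // (REOPEN/MOVE) procedure that is logged above.
      byte[] encodedRegionName = Bytes.toBytes("b3e0fb36cbe9750f5f2b47d078547932");
      ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org,46835,1690222274357");
      admin.move(encodedRegionName, dest);
    }
  }
}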
2023-07-24 18:11:24,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=b3e0fb36cbe9750f5f2b47d078547932, regionState=OPEN, openSeqNum=27, regionLocation=jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:24,704 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222284704"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222284704"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222284704"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222284704"}]},"ts":"1690222284704"} 2023-07-24 18:11:24,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=139 2023-07-24 18:11:24,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=139, state=SUCCESS; OpenRegionProcedure b3e0fb36cbe9750f5f2b47d078547932, server=jenkins-hbase4.apache.org,46835,1690222274357 in 182 msec 2023-07-24 18:11:24,710 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=139, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=b3e0fb36cbe9750f5f2b47d078547932, REOPEN/MOVE in 511 msec 2023-07-24 18:11:25,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] procedure.ProcedureSyncWait(216): waitFor pid=139 2023-07-24 18:11:25,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34431,1690222283811, jenkins-hbase4.apache.org,41163,1690222274180, jenkins-hbase4.apache.org,41941,1690222274544] are moved back to default 2023-07-24 18:11:25,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:25,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:25,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:25,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:25,204 DEBUG [Listener at localhost/44627] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:25,205 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48280, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:25,206 INFO 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34431] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34431,1690222283811' ***** 2023-07-24 18:11:25,206 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34431] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x22a569f0 2023-07-24 18:11:25,206 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:25,209 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:25,209 INFO [RS:3;jenkins-hbase4:34431] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a014d72{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,210 INFO [RS:3;jenkins-hbase4:34431] server.AbstractConnector(383): Stopped ServerConnector@7b371ca1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:25,210 INFO [RS:3;jenkins-hbase4:34431] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:25,211 INFO [RS:3;jenkins-hbase4:34431] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6434ed08{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:25,211 INFO [RS:3;jenkins-hbase4:34431] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4751acc8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:25,212 INFO [RS:3;jenkins-hbase4:34431] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:25,212 INFO [RS:3;jenkins-hbase4:34431] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:25,212 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:25,212 INFO [RS:3;jenkins-hbase4:34431] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:25,212 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:25,213 DEBUG [RS:3;jenkins-hbase4:34431] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a4952bc to 127.0.0.1:59012 2023-07-24 18:11:25,213 DEBUG [RS:3;jenkins-hbase4:34431] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,213 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34431,1690222283811; all regions closed. 
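The STOPPING/STOPPED messages above shut down region server jenkins-hbase4.apache.org,34431,1690222283811, which testClearDeadServers expects to appear in the master's dead-server list and then clears. A minimal sketch of that clearing step, assuming the 2.x Admin.clearDeadServers(List<ServerName>) and ClusterMetrics.getDeadServerNames() methods:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClearDeadServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Dead servers as reported by the master once the stopped region server's
      // ZooKeeper node has gone away (the NodeDeleted events below).
      List<ServerName> dead = new ArrayList<>(admin.getClusterMetrics().getDeadServerNames());
      // Ask the master to drop them from its dead-server list; the returned list
      // holds any servers that could not be cleared.
      List<ServerName> notCleared = admin.clearDeadServers(dead);
      System.out.println("Servers not cleared: " + notCleared);
    }
  }
}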
2023-07-24 18:11:25,219 DEBUG [RS:3;jenkins-hbase4:34431] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34431%2C1690222283811:(num 1690222284156) 2023-07-24 18:11:25,220 DEBUG [RS:3;jenkins-hbase4:34431] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:25,220 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:25,220 INFO [RS:3;jenkins-hbase4:34431] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:25,221 INFO [RS:3;jenkins-hbase4:34431] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34431 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,223 DEBUG [Listener 
at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,223 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,225 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,226 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34431,1690222283811] 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,226 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34431,1690222283811; numProcessing=1 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,226 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,226 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 znode expired, triggering replicatorRemoved event 2023-07-24 18:11:25,227 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34431,1690222283811 already deleted, retry=false 2023-07-24 18:11:25,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,227 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,34431,1690222283811 on jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:25,227 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): 
/hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 znode expired, triggering replicatorRemoved event 2023-07-24 18:11:25,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,227 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,227 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,34431,1690222283811 znode expired, triggering replicatorRemoved event 2023-07-24 18:11:25,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,228 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=142, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,34431,1690222283811, splitWal=true, meta=false 2023-07-24 18:11:25,228 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,228 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=142 for jenkins-hbase4.apache.org,34431,1690222283811 (carryingMeta=false) jenkins-hbase4.apache.org,34431,1690222283811/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@51230bde[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-24 18:11:25,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,229 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:25,229 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,230 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=142, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34431,1690222283811, splitWal=true, meta=false 2023-07-24 18:11:25,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,231 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,34431,1690222283811 had 0 regions 2023-07-24 18:11:25,232 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,232 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=142, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34431,1690222283811, splitWal=true, meta=false, isMeta: false 2023-07-24 18:11:25,232 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,233 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,233 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:25,233 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:25,233 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811-splitting 2023-07-24 18:11:25,234 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811-splitting dir is empty, no logs to split. 
2023-07-24 18:11:25,234 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,34431,1690222283811 WAL count=0, meta=false 2023-07-24 18:11:25,234 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:11:25,236 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811-splitting dir is empty, no logs to split. 2023-07-24 18:11:25,236 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,34431,1690222283811 WAL count=0, meta=false 2023-07-24 18:11:25,236 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,34431,1690222283811 WAL splitting is done? wals=0, meta=false 2023-07-24 18:11:25,238 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,34431,1690222283811 failed, ignore...File hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,34431,1690222283811-splitting does not exist. 2023-07-24 18:11:25,239 INFO [PEWorker-4] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,34431,1690222283811 after splitting done 2023-07-24 18:11:25,239 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase4.apache.org,34431,1690222283811 from processing; numProcessing=0 2023-07-24 18:11:25,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34431,1690222283811, splitWal=true, meta=false in 12 msec 2023-07-24 18:11:25,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(2362): Client=jenkins//172.31.14.131 clear dead region servers. 
2023-07-24 18:11:25,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:25,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:25,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase4.apache.org:34431] from RSGroup done 2023-07-24 18:11:25,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:25,328 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41941] ipc.CallRunner(144): callId: 67 service: ClientService methodName: Scan size: 146 connection: 172.31.14.131:55572 deadline: 1690222345328, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46835 startCode=1690222274357. As of locationSeqNum=24. 2023-07-24 18:11:25,414 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:25,414 INFO [RS:3;jenkins-hbase4:34431] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34431,1690222283811; zookeeper connection closed. 2023-07-24 18:11:25,414 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:34431-0x101988716b40028, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:25,415 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@166e5ac9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@166e5ac9 2023-07-24 18:11:25,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:25,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:25,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:25,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41941, jenkins-hbase4.apache.org:41163] to rsgroup default 2023-07-24 18:11:25,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:25,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:25,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_1696124672, current retry=0 2023-07-24 18:11:25,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41163,1690222274180, jenkins-hbase4.apache.org,41941,1690222274544] are moved back to Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_1696124672 => default 2023-07-24 18:11:25,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:25,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testClearDeadServers_1696124672 2023-07-24 18:11:25,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:25,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:11:25,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:25,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:25,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:25,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:25,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:25,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:25,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:25,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:25,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:25,460 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:25,472 INFO [Listener at localhost/44627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:25,474 INFO [Listener at localhost/44627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40813 2023-07-24 18:11:25,474 INFO [Listener at localhost/44627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:25,476 DEBUG [Listener at localhost/44627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:25,476 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to 
namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:25,477 INFO [Listener at localhost/44627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:25,478 INFO [Listener at localhost/44627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40813 connecting to ZooKeeper ensemble=127.0.0.1:59012 2023-07-24 18:11:25,481 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:408130x0, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:25,482 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:408130x0, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:25,483 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40813-0x101988716b4002a connected 2023-07-24 18:11:25,484 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:11:25,484 DEBUG [Listener at localhost/44627] zookeeper.ZKUtil(164): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:25,485 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40813 2023-07-24 18:11:25,485 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40813 2023-07-24 18:11:25,485 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40813 2023-07-24 18:11:25,485 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40813 2023-07-24 18:11:25,485 DEBUG [Listener at localhost/44627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40813 2023-07-24 18:11:25,487 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:25,487 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:25,487 INFO [Listener at localhost/44627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:25,488 INFO [Listener at localhost/44627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:25,488 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:25,488 INFO [Listener at localhost/44627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context 
static 2023-07-24 18:11:25,488 INFO [Listener at localhost/44627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:25,489 INFO [Listener at localhost/44627] http.HttpServer(1146): Jetty bound to port 37771 2023-07-24 18:11:25,489 INFO [Listener at localhost/44627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:25,492 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:25,492 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@711e9f2d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:25,493 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:25,493 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@286576b9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:25,610 INFO [Listener at localhost/44627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:25,611 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:25,612 INFO [Listener at localhost/44627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:25,612 INFO [Listener at localhost/44627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:25,613 INFO [Listener at localhost/44627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:25,614 INFO [Listener at localhost/44627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5432e201{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/java.io.tmpdir/jetty-0_0_0_0-37771-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3914006856633863770/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,616 INFO [Listener at localhost/44627] server.AbstractConnector(333): Started ServerConnector@5fa3146e{HTTP/1.1, (http/1.1)}{0.0.0.0:37771} 2023-07-24 18:11:25,616 INFO [Listener at localhost/44627] server.Server(415): Started @53836ms 2023-07-24 18:11:25,619 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(951): ClusterId : c1c1d27a-de9f-4d59-a5d6-234fda91a21c 2023-07-24 18:11:25,619 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:25,621 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:25,621 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:25,623 DEBUG 
[RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:25,624 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ReadOnlyZKClient(139): Connect 0x7f959c12 to 127.0.0.1:59012 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:25,627 DEBUG [RS:4;jenkins-hbase4:40813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71526d90, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:25,627 DEBUG [RS:4;jenkins-hbase4:40813] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@756df762, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:25,635 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:40813 2023-07-24 18:11:25,636 INFO [RS:4;jenkins-hbase4:40813] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:25,636 INFO [RS:4;jenkins-hbase4:40813] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:25,636 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:25,636 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33035,1690222274007 with isa=jenkins-hbase4.apache.org/172.31.14.131:40813, startcode=1690222285471 2023-07-24 18:11:25,636 DEBUG [RS:4;jenkins-hbase4:40813] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:25,638 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35537, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:25,638 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33035] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,638 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:25,639 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9 2023-07-24 18:11:25,639 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44619 2023-07-24 18:11:25,639 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=40867 2023-07-24 18:11:25,641 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,641 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,641 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,641 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,641 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,641 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,641 WARN [RS:4;jenkins-hbase4:40813] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:11:25,642 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40813,1690222285471] 2023-07-24 18:11:25,642 INFO [RS:4;jenkins-hbase4:40813] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:25,642 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,642 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:11:25,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,642 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,644 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33035,1690222274007] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:11:25,644 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,645 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,646 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,646 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,647 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,647 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ZKUtil(162): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,648 DEBUG [RS:4;jenkins-hbase4:40813] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:25,648 INFO [RS:4;jenkins-hbase4:40813] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:25,649 INFO [RS:4;jenkins-hbase4:40813] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:25,649 INFO [RS:4;jenkins-hbase4:40813] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:25,649 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:25,649 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:25,651 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,651 DEBUG [RS:4;jenkins-hbase4:40813] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:25,652 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:25,652 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:25,652 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:25,663 INFO [RS:4;jenkins-hbase4:40813] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:25,663 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40813,1690222285471-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:25,675 INFO [RS:4;jenkins-hbase4:40813] regionserver.Replication(203): jenkins-hbase4.apache.org,40813,1690222285471 started 2023-07-24 18:11:25,675 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40813,1690222285471, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40813, sessionid=0x101988716b4002a 2023-07-24 18:11:25,675 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:25,675 DEBUG [RS:4;jenkins-hbase4:40813] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,675 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40813,1690222285471' 2023-07-24 18:11:25,675 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:25,676 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:25,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40813,1690222285471' 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:25,677 DEBUG [RS:4;jenkins-hbase4:40813] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:25,678 DEBUG [RS:4;jenkins-hbase4:40813] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:25,678 INFO [RS:4;jenkins-hbase4:40813] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:25,678 INFO [RS:4;jenkins-hbase4:40813] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:25,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:25,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:25,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:25,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:25,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:25,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:25,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33035] to rsgroup master 2023-07-24 18:11:25,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:25,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:58434 deadline: 1690223485687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 2023-07-24 18:11:25,688 WARN [Listener at localhost/44627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor63.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33035 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-24 18:11:25,691 INFO [Listener at localhost/44627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:25,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:25,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:25,692 INFO [Listener at localhost/44627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:40813, jenkins-hbase4.apache.org:41163, jenkins-hbase4.apache.org:41941, jenkins-hbase4.apache.org:46835], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:25,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:25,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33035] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:25,713 INFO [Listener at localhost/44627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=563 (was 549) - Thread LEAK? -, OpenFileDescriptor=859 (was 885), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=495 (was 495), ProcessCount=175 (was 175), AvailableMemoryMB=7360 (was 7371) 2023-07-24 18:11:25,713 WARN [Listener at localhost/44627] hbase.ResourceChecker(130): Thread=563 is superior to 500 2023-07-24 18:11:25,713 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 18:11:25,713 INFO [Listener at localhost/44627] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:11:25,713 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x481b064b to 127.0.0.1:59012 2023-07-24 18:11:25,713 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,713 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:11:25,713 DEBUG [Listener at localhost/44627] util.JVMClusterUtil(257): Found active master hash=1557603216, stopped=false 2023-07-24 18:11:25,713 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:11:25,714 DEBUG [Listener at localhost/44627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:11:25,714 INFO [Listener at localhost/44627] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33035,1690222274007 2023-07-24 18:11:25,716 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:25,716 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): 
regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:25,717 INFO [Listener at localhost/44627] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:11:25,716 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:25,716 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:25,716 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:25,718 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:25,718 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:25,718 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:25,718 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:25,719 DEBUG [Listener at localhost/44627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5236a4f8 to 127.0.0.1:59012 2023-07-24 18:11:25,719 DEBUG [Listener at localhost/44627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,719 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41163,1690222274180' ***** 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46835,1690222274357' ***** 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41941,1690222274544' ***** 2023-07-24 18:11:25,719 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:25,719 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41163,1690222274180' ***** 2023-07-24 18:11:25,719 INFO [RS:0;jenkins-hbase4:41163] 
regionserver.HRegionServer(2311): STOPPED: Exiting; cluster shutdown set and not carrying any regions 2023-07-24 18:11:25,719 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:25,720 INFO [Listener at localhost/44627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40813,1690222285471' ***** 2023-07-24 18:11:25,720 INFO [Listener at localhost/44627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:25,720 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:25,722 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:25,723 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:25,722 INFO [RS:1;jenkins-hbase4:46835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@77a8a8e7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:25,727 INFO [RS:1;jenkins-hbase4:46835] server.AbstractConnector(383): Stopped ServerConnector@2563764c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:25,727 INFO [RS:1;jenkins-hbase4:46835] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:25,727 INFO [RS:2;jenkins-hbase4:41941] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f8f1476{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,728 INFO [RS:2;jenkins-hbase4:41941] server.AbstractConnector(383): Stopped ServerConnector@2259f828{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:25,728 INFO [RS:2;jenkins-hbase4:41941] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:25,728 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:25,728 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:25,731 INFO [RS:1;jenkins-hbase4:46835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@215674ac{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:25,731 INFO [RS:2;jenkins-hbase4:41941] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7bdf23d8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:25,732 INFO [RS:1;jenkins-hbase4:46835] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3a32e641{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:25,733 INFO [RS:2;jenkins-hbase4:41941] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@48ac8439{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:25,733 INFO [RS:0;jenkins-hbase4:41163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5113828{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,734 INFO [RS:1;jenkins-hbase4:46835] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:25,735 INFO [RS:2;jenkins-hbase4:41941] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:25,735 INFO [RS:0;jenkins-hbase4:41163] server.AbstractConnector(383): Stopped ServerConnector@6b3653b6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:25,735 INFO [RS:1;jenkins-hbase4:46835] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:25,735 INFO [RS:2;jenkins-hbase4:41941] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:25,735 INFO [RS:4;jenkins-hbase4:40813] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5432e201{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:25,735 INFO [RS:2;jenkins-hbase4:41941] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:25,735 INFO [RS:1;jenkins-hbase4:46835] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:25,735 INFO [RS:0;jenkins-hbase4:41163] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:25,735 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(3305): Received CLOSE for b3e0fb36cbe9750f5f2b47d078547932 2023-07-24 18:11:25,735 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,736 INFO [RS:4;jenkins-hbase4:40813] server.AbstractConnector(383): Stopped ServerConnector@5fa3146e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:25,736 INFO [RS:0;jenkins-hbase4:41163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1f201ec0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:25,737 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(3305): Received CLOSE for f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:25,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3e0fb36cbe9750f5f2b47d078547932, disabling compactions & flushes 2023-07-24 18:11:25,737 INFO [RS:4;jenkins-hbase4:40813] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:25,736 DEBUG [RS:2;jenkins-hbase4:41941] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0e8c6183 to 127.0.0.1:59012 2023-07-24 18:11:25,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 
2023-07-24 18:11:25,738 INFO [RS:4;jenkins-hbase4:40813] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@286576b9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:25,737 INFO [RS:0;jenkins-hbase4:41163] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5d1a92db{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:25,737 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(3305): Received CLOSE for 785da8c92abeb2f759b91756349c6ee1 2023-07-24 18:11:25,739 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46835,1690222274357 2023-07-24 18:11:25,739 DEBUG [RS:1;jenkins-hbase4:46835] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x32c5a4cc to 127.0.0.1:59012 2023-07-24 18:11:25,739 DEBUG [RS:1;jenkins-hbase4:46835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,740 INFO [RS:1;jenkins-hbase4:46835] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:25,740 INFO [RS:1;jenkins-hbase4:46835] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:25,740 INFO [RS:1;jenkins-hbase4:46835] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:25,739 INFO [RS:4;jenkins-hbase4:40813] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@711e9f2d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:25,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:25,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. after waiting 0 ms 2023-07-24 18:11:25,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:25,740 INFO [RS:0;jenkins-hbase4:41163] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:25,740 INFO [RS:0;jenkins-hbase4:41163] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:25,740 INFO [RS:4;jenkins-hbase4:40813] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:25,740 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:25,740 INFO [RS:4;jenkins-hbase4:40813] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:25,740 INFO [RS:4;jenkins-hbase4:40813] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 18:11:25,738 DEBUG [RS:2;jenkins-hbase4:41941] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,740 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,740 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:25,740 INFO [RS:0;jenkins-hbase4:41163] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:25,740 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:11:25,741 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,741 DEBUG [RS:4;jenkins-hbase4:40813] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f959c12 to 127.0.0.1:59012 2023-07-24 18:11:25,741 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41941,1690222274544; all regions closed. 2023-07-24 18:11:25,741 DEBUG [RS:4;jenkins-hbase4:40813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,741 DEBUG [RS:0;jenkins-hbase4:41163] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x07b0f986 to 127.0.0.1:59012 2023-07-24 18:11:25,741 DEBUG [RS:0;jenkins-hbase4:41163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,741 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40813,1690222285471; all regions closed. 2023-07-24 18:11:25,741 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41163,1690222274180; all regions closed. 
2023-07-24 18:11:25,741 DEBUG [RS:4;jenkins-hbase4:40813] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,741 INFO [RS:4;jenkins-hbase4:40813] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,741 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 18:11:25,742 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1478): Online Regions={b3e0fb36cbe9750f5f2b47d078547932=hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932., f93db382913b37f9661cac1fd8ee01a9=hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9., 1588230740=hbase:meta,,1.1588230740, 785da8c92abeb2f759b91756349c6ee1=hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1.} 2023-07-24 18:11:25,742 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:11:25,742 DEBUG [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1504): Waiting on 1588230740, 785da8c92abeb2f759b91756349c6ee1, b3e0fb36cbe9750f5f2b47d078547932, f93db382913b37f9661cac1fd8ee01a9 2023-07-24 18:11:25,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:11:25,742 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:11:25,742 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:11:25,742 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:11:25,742 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.28 KB heapSize=7.76 KB 2023-07-24 18:11:25,742 INFO [RS:4;jenkins-hbase4:40813] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:25,742 INFO [RS:4;jenkins-hbase4:40813] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:25,742 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:25,742 INFO [RS:4;jenkins-hbase4:40813] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:25,744 INFO [RS:4;jenkins-hbase4:40813] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:11:25,746 INFO [RS:4;jenkins-hbase4:40813] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40813 2023-07-24 18:11:25,751 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,751 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,753 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,754 DEBUG [RS:2;jenkins-hbase4:41941] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:25,754 INFO [RS:2;jenkins-hbase4:41941] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41941%2C1690222274544:(num 1690222275259) 2023-07-24 18:11:25,754 DEBUG [RS:2;jenkins-hbase4:41941] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,754 INFO [RS:2;jenkins-hbase4:41941] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,756 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,760 INFO [RS:2;jenkins-hbase4:41941] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:25,760 INFO [RS:2;jenkins-hbase4:41941] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:25,760 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:25,760 INFO [RS:2;jenkins-hbase4:41941] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:25,760 INFO [RS:2;jenkins-hbase4:41941] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:25,761 INFO [RS:2;jenkins-hbase4:41941] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41941 2023-07-24 18:11:25,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/namespace/b3e0fb36cbe9750f5f2b47d078547932/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=26 2023-07-24 18:11:25,762 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:25,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3e0fb36cbe9750f5f2b47d078547932: 2023-07-24 18:11:25,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222242283.b3e0fb36cbe9750f5f2b47d078547932. 2023-07-24 18:11:25,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f93db382913b37f9661cac1fd8ee01a9, disabling compactions & flushes 2023-07-24 18:11:25,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:25,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 
2023-07-24 18:11:25,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. after waiting 0 ms 2023-07-24 18:11:25,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:25,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f93db382913b37f9661cac1fd8ee01a9 1/1 column families, dataSize=4.27 KB heapSize=7.02 KB 2023-07-24 18:11:25,772 DEBUG [RS:0;jenkins-hbase4:41163] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:25,772 INFO [RS:0;jenkins-hbase4:41163] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41163%2C1690222274180:(num 1690222275274) 2023-07-24 18:11:25,772 DEBUG [RS:0;jenkins-hbase4:41163] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:25,772 INFO [RS:0;jenkins-hbase4:41163] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:25,772 INFO [RS:0;jenkins-hbase4:41163] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:25,772 INFO [RS:0;jenkins-hbase4:41163] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:25,772 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:25,772 INFO [RS:0;jenkins-hbase4:41163] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:25,773 INFO [RS:0;jenkins-hbase4:41163] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:11:25,779 INFO [RS:0;jenkins-hbase4:41163] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41163 2023-07-24 18:11:25,787 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.28 KB at sequenceid=181 (bloomFilter=false), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/00d5594e692c4805868f5d4701e617e3 2023-07-24 18:11:25,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.27 KB at sequenceid=102 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/70736c728e6d4010a1c8321656ab206c 2023-07-24 18:11:25,794 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/.tmp/info/00d5594e692c4805868f5d4701e617e3 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/00d5594e692c4805868f5d4701e617e3 2023-07-24 18:11:25,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70736c728e6d4010a1c8321656ab206c 2023-07-24 18:11:25,800 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/00d5594e692c4805868f5d4701e617e3, entries=31, sequenceid=181, filesize=8.3 K 2023-07-24 18:11:25,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/.tmp/m/70736c728e6d4010a1c8321656ab206c as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/70736c728e6d4010a1c8321656ab206c 2023-07-24 18:11:25,801 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.28 KB/4384, heapSize ~7.24 KB/7416, currentSize=0 B/0 for 1588230740 in 59ms, sequenceid=181, compaction requested=false 2023-07-24 18:11:25,806 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d] to archive 2023-07-24 18:11:25,807 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-24 18:11:25,809 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/info/b7efcf27a4234e8cb81fe70d74c707cd 2023-07-24 18:11:25,811 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/info/f7f4dbb0133a4183b89b4fe6e9566541 2023-07-24 18:11:25,813 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/info/a79e17af74e44f32952a7d071379d76d 2023-07-24 18:11:25,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 70736c728e6d4010a1c8321656ab206c 2023-07-24 18:11:25,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/70736c728e6d4010a1c8321656ab206c, entries=6, sequenceid=102, filesize=5.4 K 2023-07-24 18:11:25,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.27 KB/4376, heapSize ~7.01 KB/7176, currentSize=0 B/0 for f93db382913b37f9661cac1fd8ee01a9 in 55ms, sequenceid=102, compaction requested=false 2023-07-24 18:11:25,825 DEBUG [StoreCloser-hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6] to archive 2023-07-24 18:11:25,826 DEBUG [StoreCloser-hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,828 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,829 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40813,1690222285471 2023-07-24 18:11:25,829 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:25,829 DEBUG [StoreCloser-hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d5cd966a907b4e6e86b91fb7d6889add 2023-07-24 18:11:25,830 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,830 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,830 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,830 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,830 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,831 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,831 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41941,1690222274544 2023-07-24 18:11:25,831 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41163,1690222274180 2023-07-24 18:11:25,831 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40813,1690222285471] 2023-07-24 18:11:25,831 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40813,1690222285471; numProcessing=1 2023-07-24 18:11:25,838 DEBUG [StoreCloser-hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/c5e564844d934f86b57f8f0aadc04422 2023-07-24 18:11:25,839 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40813,1690222285471 already deleted, retry=false 2023-07-24 18:11:25,839 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40813,1690222285471 expired; onlineServers=3 2023-07-24 18:11:25,839 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41941,1690222274544] 2023-07-24 18:11:25,839 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41941,1690222274544; numProcessing=2 2023-07-24 18:11:25,842 DEBUG 
[StoreCloser-hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/m/d08a5ba50b5c4cb6b3b0378bbcc621b6 2023-07-24 18:11:25,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/rsgroup/f93db382913b37f9661cac1fd8ee01a9/recovered.edits/105.seqid, newMaxSeqId=105, maxSeqId=83 2023-07-24 18:11:25,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:25,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:25,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f93db382913b37f9661cac1fd8ee01a9: 2023-07-24 18:11:25,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222242536.f93db382913b37f9661cac1fd8ee01a9. 2023-07-24 18:11:25,870 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 785da8c92abeb2f759b91756349c6ee1, disabling compactions & flushes 2023-07-24 18:11:25,871 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:25,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:25,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. after waiting 0 ms 2023-07-24 18:11:25,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:25,875 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650] to archive 2023-07-24 18:11:25,876 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-24 18:11:25,878 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/table/ed4eee4aebd4497b91a21f8f303e8b08 2023-07-24 18:11:25,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/quota/785da8c92abeb2f759b91756349c6ee1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:11:25,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:25,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 785da8c92abeb2f759b91756349c6ee1: 2023-07-24 18:11:25,880 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/table/fde3e8b12951484eaef87586119cf207 2023-07-24 18:11:25,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690222267872.785da8c92abeb2f759b91756349c6ee1. 2023-07-24 18:11:25,881 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650 to hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/archive/data/hbase/meta/1588230740/table/230c7749dda64496b1ef6916ca5f4650 2023-07-24 18:11:25,890 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/data/hbase/meta/1588230740/recovered.edits/184.seqid, newMaxSeqId=184, maxSeqId=166 2023-07-24 18:11:25,891 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:25,892 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:25,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:11:25,892 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:25,939 INFO [RS:4;jenkins-hbase4:40813] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40813,1690222285471; zookeeper connection closed. 
2023-07-24 18:11:25,939 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:25,939 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:40813-0x101988716b4002a, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:25,939 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@e0b4dcb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@e0b4dcb 2023-07-24 18:11:25,940 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41941,1690222274544 already deleted, retry=false 2023-07-24 18:11:25,940 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41941,1690222274544 expired; onlineServers=2 2023-07-24 18:11:25,940 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41163,1690222274180] 2023-07-24 18:11:25,940 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41163,1690222274180; numProcessing=3 2023-07-24 18:11:25,942 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46835,1690222274357; all regions closed. 2023-07-24 18:11:25,946 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/WALs/jenkins-hbase4.apache.org,46835,1690222274357/jenkins-hbase4.apache.org%2C46835%2C1690222274357.meta.1690222275328.meta not finished, retry = 0 2023-07-24 18:11:26,031 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:26,031 INFO [RS:2;jenkins-hbase4:41941] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41941,1690222274544; zookeeper connection closed. 2023-07-24 18:11:26,031 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41941-0x101988716b4001f, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:26,032 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3817d3e6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3817d3e6 2023-07-24 18:11:26,039 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:26,039 INFO [RS:0;jenkins-hbase4:41163] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41163,1690222274180; zookeeper connection closed. 
2023-07-24 18:11:26,039 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:41163-0x101988716b4001d, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:26,039 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@20e55eef] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@20e55eef 2023-07-24 18:11:26,041 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41163,1690222274180 already deleted, retry=false 2023-07-24 18:11:26,041 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41163,1690222274180 expired; onlineServers=1 2023-07-24 18:11:26,049 DEBUG [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:26,049 INFO [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46835%2C1690222274357.meta:.meta(num 1690222275328) 2023-07-24 18:11:26,056 DEBUG [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/oldWALs 2023-07-24 18:11:26,056 INFO [RS:1;jenkins-hbase4:46835] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46835%2C1690222274357:(num 1690222275267) 2023-07-24 18:11:26,056 DEBUG [RS:1;jenkins-hbase4:46835] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:26,056 INFO [RS:1;jenkins-hbase4:46835] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:26,056 INFO [RS:1;jenkins-hbase4:46835] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:26,057 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 18:11:26,057 INFO [RS:1;jenkins-hbase4:46835] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46835
2023-07-24 18:11:26,060 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46835,1690222274357
2023-07-24 18:11:26,060 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-24 18:11:26,061 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46835,1690222274357]
2023-07-24 18:11:26,061 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46835,1690222274357; numProcessing=4
2023-07-24 18:11:26,062 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46835,1690222274357 already deleted, retry=false
2023-07-24 18:11:26,062 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46835,1690222274357 expired; onlineServers=0
2023-07-24 18:11:26,062 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33035,1690222274007' *****
2023-07-24 18:11:26,062 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-24 18:11:26,062 DEBUG [M:0;jenkins-hbase4:33035] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58e8acd2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-24 18:11:26,062 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-24 18:11:26,065 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-24 18:11:26,065 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-24 18:11:26,065 INFO [M:0;jenkins-hbase4:33035] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@267a8b58{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-24 18:11:26,065 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-24 18:11:26,065 INFO [M:0;jenkins-hbase4:33035] server.AbstractConnector(383): Stopped ServerConnector@263442ac{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-24 18:11:26,066 INFO [M:0;jenkins-hbase4:33035] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-24 18:11:26,066 INFO [M:0;jenkins-hbase4:33035] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@65d94138{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-24 18:11:26,067 INFO [M:0;jenkins-hbase4:33035] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@729672e5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/hadoop.log.dir/,STOPPED}
2023-07-24 18:11:26,067 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33035,1690222274007
2023-07-24 18:11:26,067 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33035,1690222274007; all regions closed.
2023-07-24 18:11:26,067 DEBUG [M:0;jenkins-hbase4:33035] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-24 18:11:26,067 INFO [M:0;jenkins-hbase4:33035] master.HMaster(1491): Stopping master jetty server
2023-07-24 18:11:26,068 INFO [M:0;jenkins-hbase4:33035] server.AbstractConnector(383): Stopped ServerConnector@40c0c4ae{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-24 18:11:26,069 DEBUG [M:0;jenkins-hbase4:33035] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-24 18:11:26,069 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-24 18:11:26,069 DEBUG [M:0;jenkins-hbase4:33035] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-24 18:11:26,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222274995] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222274995,5,FailOnTimeoutGroup]
2023-07-24 18:11:26,069 INFO [M:0;jenkins-hbase4:33035] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-24 18:11:26,069 INFO [M:0;jenkins-hbase4:33035] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-24 18:11:26,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222274988] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222274988,5,FailOnTimeoutGroup]
2023-07-24 18:11:26,069 INFO [M:0;jenkins-hbase4:33035] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-24 18:11:26,069 DEBUG [M:0;jenkins-hbase4:33035] master.HMaster(1512): Stopping service threads
2023-07-24 18:11:26,069 INFO [M:0;jenkins-hbase4:33035] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-24 18:11:26,070 ERROR [M:0;jenkins-hbase4:33035] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-24 18:11:26,070 INFO [M:0;jenkins-hbase4:33035] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-24 18:11:26,070 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-24 18:11:26,070 DEBUG [M:0;jenkins-hbase4:33035] zookeeper.ZKUtil(398): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-24 18:11:26,070 WARN [M:0;jenkins-hbase4:33035] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-24 18:11:26,070 INFO [M:0;jenkins-hbase4:33035] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-24 18:11:26,070 INFO [M:0;jenkins-hbase4:33035] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-24 18:11:26,071 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-24 18:11:26,071 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 18:11:26,071 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 18:11:26,071 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-24 18:11:26,071 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 18:11:26,071 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.12 KB heapSize=55.80 KB
2023-07-24 18:11:26,083 INFO [M:0;jenkins-hbase4:33035] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.12 KB at sequenceid=1083 (bloomFilter=true), to=hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ec9b64435d5e42dcbe362d4edbf33558
2023-07-24 18:11:26,088 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/ec9b64435d5e42dcbe362d4edbf33558 as hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ec9b64435d5e42dcbe362d4edbf33558
2023-07-24 18:11:26,094 INFO [M:0;jenkins-hbase4:33035] regionserver.HStore(1080): Added hdfs://localhost:44619/user/jenkins/test-data/52cf8dd8-83a9-df38-c6ae-6fd8da75fea9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/ec9b64435d5e42dcbe362d4edbf33558, entries=15, sequenceid=1083, filesize=6.9 K
2023-07-24 18:11:26,095 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegion(2948): Finished flush of dataSize ~45.12 KB/46202, heapSize ~55.78 KB/57120, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=1083, compaction requested=true
2023-07-24 18:11:26,099 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-24 18:11:26,099 DEBUG [M:0;jenkins-hbase4:33035] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-24 18:11:26,107 INFO [M:0;jenkins-hbase4:33035] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-24 18:11:26,107 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-24 18:11:26,107 INFO [M:0;jenkins-hbase4:33035] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33035
2023-07-24 18:11:26,109 DEBUG [M:0;jenkins-hbase4:33035] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33035,1690222274007 already deleted, retry=false
2023-07-24 18:11:26,432 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:26,432 INFO [M:0;jenkins-hbase4:33035] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33035,1690222274007; zookeeper connection closed.
2023-07-24 18:11:26,432 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): master:33035-0x101988716b4001c, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:26,532 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:26,532 INFO [RS:1;jenkins-hbase4:46835] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46835,1690222274357; zookeeper connection closed.
2023-07-24 18:11:26,532 DEBUG [Listener at localhost/44627-EventThread] zookeeper.ZKWatcher(600): regionserver:46835-0x101988716b4001e, quorum=127.0.0.1:59012, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:26,533 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@525397b7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@525397b7
2023-07-24 18:11:26,533 INFO [Listener at localhost/44627] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete
2023-07-24 18:11:26,533 WARN [Listener at localhost/44627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:26,539 INFO [Listener at localhost/44627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:26,642 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:26,642 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-938617020-172.31.14.131-1690222233780 (Datanode Uuid 616d8a0c-3e9f-456b-9225-0e95e7fa9e0e) service to localhost/127.0.0.1:44619
2023-07-24 18:11:26,643 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data5/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,644 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data6/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,646 WARN [Listener at localhost/44627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:26,650 INFO [Listener at localhost/44627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:26,757 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:26,757 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-938617020-172.31.14.131-1690222233780 (Datanode Uuid 182d4e07-339c-40db-baf0-22f3a970020f) service to localhost/127.0.0.1:44619
2023-07-24 18:11:26,758 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data3/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,758 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data4/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,760 WARN [Listener at localhost/44627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:26,762 INFO [Listener at localhost/44627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:26,866 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:26,866 WARN [BP-938617020-172.31.14.131-1690222233780 heartbeating to localhost/127.0.0.1:44619] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-938617020-172.31.14.131-1690222233780 (Datanode Uuid ba5cd861-1676-4644-93dc-b5fe8d8e848d) service to localhost/127.0.0.1:44619
2023-07-24 18:11:26,867 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data1/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,867 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4f8cc2ad-e119-e155-390b-0696b5b1230e/cluster_fa45b89e-e73a-a9aa-1da3-e40f68b74d6c/dfs/data/data2/current/BP-938617020-172.31.14.131-1690222233780] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:26,896 INFO [Listener at localhost/44627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:27,022 INFO [Listener at localhost/44627] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-24 18:11:27,073 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down
2023-07-24 18:11:27,107 INFO [Listener at localhost/44627] hbase.HBaseTestingUtility(1293): Minicluster is down
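Note: the entries above record the standard teardown order of the test's mini cluster: the HBase JVM cluster stops first (JVMClusterUtil), then the MiniDFS datanodes, then the MiniZK quorum, and HBaseTestingUtility finally reports "Minicluster is down". For orientation only, the sketch below shows the usual JUnit lifecycle that drives such a sequence. It is an assumption-based illustration, not the source of this log: the class name, test name, and cluster sizes are hypothetical; only HBaseTestingUtility, StartMiniClusterOption, startMiniCluster, and shutdownMiniCluster are real HBase test APIs.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  // Shared test utility; owns the mini HBase/DFS/ZK cluster for this test class.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    // Illustrative sizes (assumption); real tests choose their own counts.
    TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build());
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    // Stops HBase, then the mini DFS and mini ZK cluster; this is the call
    // that produces a shutdown sequence ending in "Minicluster is down".
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void smokeTest() throws Exception {
    // Placeholder body (hypothetical); real assertions depend on the test under study.
  }
}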