2023-07-13 15:16:00,776 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b 2023-07-13 15:16:00,793 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics timeout: 13 mins 2023-07-13 15:16:00,808 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 15:16:00,809 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b, deleteOnExit=true 2023-07-13 15:16:00,809 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 15:16:00,809 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/test.cache.data in system properties and HBase conf 2023-07-13 15:16:00,810 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 15:16:00,810 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir in system properties and HBase conf 2023-07-13 15:16:00,811 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 15:16:00,811 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 15:16:00,811 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 15:16:00,943 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-13 15:16:01,363 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 15:16:01,369 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:01,370 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 15:16:01,370 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 15:16:01,371 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:01,371 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 15:16:01,372 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 15:16:01,372 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 15:16:01,373 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:01,374 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 15:16:01,375 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/nfs.dump.dir in system properties and HBase conf 2023-07-13 15:16:01,375 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir in system properties and HBase conf 2023-07-13 15:16:01,376 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 15:16:01,376 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 15:16:01,376 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 15:16:01,956 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:01,961 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:02,299 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-13 15:16:02,487 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-13 15:16:02,500 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:02,534 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:02,578 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/Jetty_localhost_42665_hdfs____.eblafz/webapp 2023-07-13 15:16:02,743 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42665 2023-07-13 15:16:02,786 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 15:16:02,786 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 15:16:03,262 WARN [Listener at localhost/36199] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:03,368 WARN [Listener at localhost/36199] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:03,389 WARN [Listener at localhost/36199] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:03,396 INFO [Listener at localhost/36199] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:03,402 INFO [Listener at localhost/36199] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/Jetty_localhost_37089_datanode____y6sjw4/webapp 2023-07-13 15:16:03,521 INFO [Listener at localhost/36199] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37089 2023-07-13 15:16:03,952 WARN [Listener at localhost/37193] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:03,979 WARN [Listener at localhost/37193] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:03,984 WARN [Listener at localhost/37193] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:03,986 INFO [Listener at localhost/37193] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:03,991 INFO [Listener at localhost/37193] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/Jetty_localhost_44703_datanode____.2q6p0f/webapp 2023-07-13 15:16:04,093 INFO [Listener at localhost/37193] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44703 2023-07-13 15:16:04,103 WARN [Listener at localhost/41473] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:04,156 WARN [Listener at localhost/41473] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 15:16:04,165 WARN [Listener at localhost/41473] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 15:16:04,168 INFO [Listener at localhost/41473] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 15:16:04,185 INFO [Listener at localhost/41473] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/Jetty_localhost_35217_datanode____r3y1b1/webapp 2023-07-13 15:16:04,322 INFO [Listener at localhost/41473] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35217 2023-07-13 15:16:04,334 WARN [Listener at localhost/35161] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 15:16:04,555 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e7242a35eec9173: Processing first storage report for DS-b49739d6-b46c-4b6e-a2b8-71840a57307d from datanode 4dd20a9a-0b40-4e00-95d7-124a813012b0 2023-07-13 15:16:04,556 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e7242a35eec9173: from storage DS-b49739d6-b46c-4b6e-a2b8-71840a57307d node DatanodeRegistration(127.0.0.1:33357, datanodeUuid=4dd20a9a-0b40-4e00-95d7-124a813012b0, infoPort=43975, 
infoSecurePort=0, ipcPort=37193, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,557 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa08254912871a0d4: Processing first storage report for DS-446258f2-9f59-4682-ba62-dd8f3f96d844 from datanode 9c2b5109-f291-4013-9d71-f50c6e2558e9 2023-07-13 15:16:04,557 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa08254912871a0d4: from storage DS-446258f2-9f59-4682-ba62-dd8f3f96d844 node DatanodeRegistration(127.0.0.1:37767, datanodeUuid=9c2b5109-f291-4013-9d71-f50c6e2558e9, infoPort=40189, infoSecurePort=0, ipcPort=41473, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,557 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48e4de376ca4ca08: Processing first storage report for DS-9c83551c-8838-49a7-8254-8997fa3f68f2 from datanode b7a0296c-a721-427d-a6be-dd83259197a1 2023-07-13 15:16:04,557 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48e4de376ca4ca08: from storage DS-9c83551c-8838-49a7-8254-8997fa3f68f2 node DatanodeRegistration(127.0.0.1:38723, datanodeUuid=b7a0296c-a721-427d-a6be-dd83259197a1, infoPort=43481, infoSecurePort=0, ipcPort=35161, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,557 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1e7242a35eec9173: Processing first storage report for DS-803ed538-8f67-4860-bc4c-5b94a0dc631f from datanode 4dd20a9a-0b40-4e00-95d7-124a813012b0 2023-07-13 15:16:04,558 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1e7242a35eec9173: from storage DS-803ed538-8f67-4860-bc4c-5b94a0dc631f node DatanodeRegistration(127.0.0.1:33357, datanodeUuid=4dd20a9a-0b40-4e00-95d7-124a813012b0, infoPort=43975, infoSecurePort=0, ipcPort=37193, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,558 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa08254912871a0d4: Processing first storage report for DS-bd462ef0-1cf0-4ec7-9ff9-05fd2b3f2c47 from datanode 9c2b5109-f291-4013-9d71-f50c6e2558e9 2023-07-13 15:16:04,558 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa08254912871a0d4: from storage DS-bd462ef0-1cf0-4ec7-9ff9-05fd2b3f2c47 node DatanodeRegistration(127.0.0.1:37767, datanodeUuid=9c2b5109-f291-4013-9d71-f50c6e2558e9, infoPort=40189, infoSecurePort=0, ipcPort=41473, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,558 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x48e4de376ca4ca08: Processing first storage report for DS-dfa9ff9f-0458-4920-8997-2d12b011f2a7 from datanode b7a0296c-a721-427d-a6be-dd83259197a1 2023-07-13 15:16:04,558 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x48e4de376ca4ca08: from storage 
DS-dfa9ff9f-0458-4920-8997-2d12b011f2a7 node DatanodeRegistration(127.0.0.1:38723, datanodeUuid=b7a0296c-a721-427d-a6be-dd83259197a1, infoPort=43481, infoSecurePort=0, ipcPort=35161, storageInfo=lv=-57;cid=testClusterID;nsid=1091901744;c=1689261362039), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 15:16:04,750 DEBUG [Listener at localhost/35161] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b 2023-07-13 15:16:04,820 INFO [Listener at localhost/35161] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/zookeeper_0, clientPort=56695, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 15:16:04,833 INFO [Listener at localhost/35161] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56695 2023-07-13 15:16:04,841 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:04,844 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:05,492 INFO [Listener at localhost/35161] util.FSUtils(471): Created version file at hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 with version=8 2023-07-13 15:16:05,492 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/hbase-staging 2023-07-13 15:16:05,502 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 15:16:05,502 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 15:16:05,502 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 15:16:05,502 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-13 15:16:06,013 INFO [Listener at localhost/35161] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-13 15:16:06,576 INFO [Listener at localhost/35161] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:06,616 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:06,617 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:06,617 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:06,617 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:06,618 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:06,799 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:06,900 DEBUG [Listener at localhost/35161] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-13 15:16:06,994 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38141 2023-07-13 15:16:07,007 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,008 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,031 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38141 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:07,075 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:381410x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:07,078 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38141-0x1015f4159470000 connected 2023-07-13 15:16:07,106 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:07,107 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:07,111 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:07,120 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38141 2023-07-13 15:16:07,120 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38141 2023-07-13 15:16:07,121 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38141 2023-07-13 15:16:07,121 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38141 2023-07-13 15:16:07,122 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38141 2023-07-13 15:16:07,154 INFO [Listener at localhost/35161] log.Log(170): Logging initialized @7153ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-13 15:16:07,314 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:07,315 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:07,315 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:07,317 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:16:07,317 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:07,317 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:07,321 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:07,385 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 32963 2023-07-13 15:16:07,387 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:07,418 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,421 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f33f59f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:07,422 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,422 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6d756d8e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:07,591 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:07,603 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:07,603 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:07,605 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:07,611 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,636 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3dee0740{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-32963-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1919745850398630928/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:07,648 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@6449840d{HTTP/1.1, (http/1.1)}{0.0.0.0:32963} 2023-07-13 15:16:07,648 INFO [Listener at localhost/35161] server.Server(415): Started @7648ms 2023-07-13 15:16:07,652 INFO [Listener at localhost/35161] master.HMaster(444): hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046, hbase.cluster.distributed=false 2023-07-13 15:16:07,728 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:07,728 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,728 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,728 INFO 
[Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:07,729 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,729 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:07,735 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:07,738 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33725 2023-07-13 15:16:07,742 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:07,749 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:07,750 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,752 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,753 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33725 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:07,758 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:337250x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:07,759 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33725-0x1015f4159470001 connected 2023-07-13 15:16:07,760 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:07,761 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:07,762 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:07,762 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33725 2023-07-13 15:16:07,762 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33725 2023-07-13 15:16:07,763 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33725 2023-07-13 15:16:07,764 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33725 2023-07-13 15:16:07,764 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33725 2023-07-13 15:16:07,767 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:07,767 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:07,767 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:07,767 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:07,768 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:07,768 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:07,768 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:07,771 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 41925 2023-07-13 15:16:07,771 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:07,774 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,774 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69738833{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:07,775 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,775 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7ea3cad5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:07,898 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:07,899 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:07,900 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:07,900 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:07,903 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,908 INFO 
[Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5a679f9e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-41925-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4583556884326284075/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:07,910 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@25eab107{HTTP/1.1, (http/1.1)}{0.0.0.0:41925} 2023-07-13 15:16:07,910 INFO [Listener at localhost/35161] server.Server(415): Started @7910ms 2023-07-13 15:16:07,927 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:07,927 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,927 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,928 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:07,928 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:07,928 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:07,929 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:07,932 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34275 2023-07-13 15:16:07,932 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:07,935 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:07,936 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,939 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:07,940 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34275 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:07,943 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:342750x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 
15:16:07,945 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:342750x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:07,945 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34275-0x1015f4159470002 connected 2023-07-13 15:16:07,946 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:07,947 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:07,949 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34275 2023-07-13 15:16:07,949 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34275 2023-07-13 15:16:07,949 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34275 2023-07-13 15:16:07,951 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34275 2023-07-13 15:16:07,951 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34275 2023-07-13 15:16:07,953 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:07,954 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:07,954 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:07,954 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:07,954 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:07,955 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:07,955 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:07,955 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 46637 2023-07-13 15:16:07,955 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:07,962 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,962 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@425bb8be{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:07,963 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:07,963 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@38834d1b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:08,098 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:08,099 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:08,100 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:08,100 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:08,103 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:08,104 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7c8e2ea{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-46637-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2274133414091071621/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:08,105 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@12e45025{HTTP/1.1, (http/1.1)}{0.0.0.0:46637} 2023-07-13 15:16:08,106 INFO [Listener at localhost/35161] server.Server(415): Started @8105ms 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:08,120 INFO 
[Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:08,120 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:08,122 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36737 2023-07-13 15:16:08,122 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:08,123 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:08,125 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:08,126 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:08,127 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36737 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:08,130 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:367370x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:08,132 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36737-0x1015f4159470003 connected 2023-07-13 15:16:08,132 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:08,133 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:08,133 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:08,134 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36737 2023-07-13 15:16:08,134 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36737 2023-07-13 15:16:08,138 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36737 2023-07-13 15:16:08,139 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36737 2023-07-13 15:16:08,139 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36737 2023-07-13 15:16:08,142 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:08,142 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:08,142 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:08,143 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:08,143 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:08,143 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:08,144 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:08,145 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 35029 2023-07-13 15:16:08,145 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:08,151 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:08,151 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@50d1e028{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:08,152 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:08,152 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ae3c253{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:08,279 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:08,280 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:08,281 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:08,281 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:08,283 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:08,284 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@4ad0acc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-35029-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5880908500957541066/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:08,286 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@5a58295b{HTTP/1.1, (http/1.1)}{0.0.0.0:35029} 2023-07-13 15:16:08,286 INFO [Listener at localhost/35161] server.Server(415): Started @8286ms 2023-07-13 15:16:08,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7082f2b7{HTTP/1.1, (http/1.1)}{0.0.0.0:44375} 2023-07-13 15:16:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8315ms 2023-07-13 15:16:08,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:08,328 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:08,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:08,350 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:08,351 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:08,351 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:08,350 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:08,352 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:08,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:08,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38141,1689261365700 from backup master directory 2023-07-13 15:16:08,354 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:08,359 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:08,359 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:08,360 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:08,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:08,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-13 15:16:08,369 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-13 15:16:08,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/hbase.id with ID: d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:08,541 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:08,558 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:08,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3f16ae4c to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:08,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3aa91a14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:08,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:08,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:16:08,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-13 15:16:08,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-13 15:16:08,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:08,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:08,693 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:08,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store-tmp 2023-07-13 15:16:08,816 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:08,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:08,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:08,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:08,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:08,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:08,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
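The two DEBUG traces above are the async WAL helper probing the running Hadoop version by reflection (the messages themselves say "should be hadoop 3.2 or below" / "should be hadoop 2.x") before the master settles on AsyncFSWALProvider for the 'master:store' WAL. As a hedged sketch, not taken from this test: the provider is selected through the hbase.wal.provider / hbase.wal.meta_provider settings, so a setup that has trouble with the async output path could fall back to the classic filesystem provider.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderFallback {
  public static void main(String[] args) {
    // Hedged sketch: force the classic FSHLog-based provider instead of the
    // asyncfs provider whose Hadoop-version probing is logged above.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "filesystem");       // default in 2.x is "asyncfs"
    conf.set("hbase.wal.meta_provider", "filesystem");  // WAL provider used for hbase:meta
  }
}
```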
2023-07-13 15:16:08,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:08,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:08,851 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38141%2C1689261365700, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/oldWALs, maxLogs=10 2023-07-13 15:16:08,909 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:08,909 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:08,909 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:08,917 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:08,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:08,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:08,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:08,990 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:08,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:08,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:09,065 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:09,072 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:16:09,100 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:16:09,114 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-13 15:16:09,120 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:09,122 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:09,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:09,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:09,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10253203680, jitterRate=-0.04509599506855011}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:09,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:09,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:16:09,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:16:09,171 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:16:09,174 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 15:16:09,176 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-13 15:16:09,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-13 15:16:09,225 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:16:09,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 15:16:09,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 15:16:09,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 15:16:09,278 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:16:09,283 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:16:09,286 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:09,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:16:09,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:16:09,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:16:09,307 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:09,307 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:09,307 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:09,307 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:09,307 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:09,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38141,1689261365700, sessionid=0x1015f4159470000, setting cluster-up flag (Was=false) 2023-07-13 15:16:09,329 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:09,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:16:09,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:09,342 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:09,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:16:09,350 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:09,353 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp 2023-07-13 15:16:09,397 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:09,397 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:09,397 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:09,406 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:09,406 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:09,406 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:09,417 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:09,417 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:09,417 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:09,417 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:09,417 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:09,417 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:09,423 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:09,424 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ReadOnlyZKClient(139): Connect 0x021739ce to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:09,426 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:09,426 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot 
initialized 2023-07-13 15:16:09,435 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ReadOnlyZKClient(139): Connect 0x381f9eb6 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:09,436 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ReadOnlyZKClient(139): Connect 0x4d641f4d to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:09,456 DEBUG [RS:2;jenkins-hbase4:36737] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b532efe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:09,457 DEBUG [RS:2;jenkins-hbase4:36737] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a4d19ce, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:09,455 DEBUG [RS:1;jenkins-hbase4:34275] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38f9262c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:09,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:16:09,458 DEBUG [RS:1;jenkins-hbase4:34275] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68faa2af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:09,463 DEBUG [RS:0;jenkins-hbase4:33725] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d34edf2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:09,463 DEBUG [RS:0;jenkins-hbase4:33725] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4be88d39, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:09,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:16:09,475 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:09,478 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:16:09,478 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
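The coprocessor lines above show the two pieces the rsgroup tests depend on: RSGroupAdminEndpoint registered as a master coprocessor, plus the test's own CPMasterObserver. A minimal sketch of that wiring, assuming the standard 2.x config keys (the actual setup in TestRSGroupsBase may differ):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint;
import org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer;

public class RsGroupWiring {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Register the rsgroup admin endpoint as a master coprocessor and switch the
    // master to the group-aware balancer; both are needed for rsgroup support in 2.x.
    conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY, RSGroupAdminEndpoint.class.getName());
    conf.set(HConstants.HBASE_MASTER_LOADBALANCER_CLASS, RSGroupBasedLoadBalancer.class.getName());
  }
}
```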
2023-07-13 15:16:09,487 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34275 2023-07-13 15:16:09,490 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36737 2023-07-13 15:16:09,490 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33725 2023-07-13 15:16:09,495 INFO [RS:0;jenkins-hbase4:33725] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:09,495 INFO [RS:1;jenkins-hbase4:34275] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:09,499 INFO [RS:1;jenkins-hbase4:34275] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:09,495 INFO [RS:2;jenkins-hbase4:36737] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:09,499 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:09,499 INFO [RS:0;jenkins-hbase4:33725] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:09,500 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:09,499 INFO [RS:2;jenkins-hbase4:36737] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:09,500 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:09,504 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:33725, startcode=1689261367727 2023-07-13 15:16:09,504 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:36737, startcode=1689261368119 2023-07-13 15:16:09,504 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:34275, startcode=1689261367926 2023-07-13 15:16:09,527 DEBUG [RS:1;jenkins-hbase4:34275] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:09,529 DEBUG [RS:0;jenkins-hbase4:33725] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:09,527 DEBUG [RS:2;jenkins-hbase4:36737] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:09,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:09,598 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35031, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:09,598 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33713, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-13 15:16:09,598 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32833, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:09,609 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:09,622 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:09,624 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:09,652 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:16:09,652 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:16:09,652 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 15:16:09,652 WARN [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 15:16:09,652 WARN [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-13 15:16:09,652 WARN [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 15:16:09,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:09,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:09,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:09,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:09,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:09,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:09,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:09,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:09,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:16:09,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:09,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:09,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:09,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261399665 2023-07-13 15:16:09,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:16:09,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:16:09,674 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:09,675 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 15:16:09,677 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:09,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:16:09,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:16:09,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:16:09,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:16:09,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
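The hbase:meta descriptor written above (info, rep_barrier and table families, 8192-byte block size for info and table) can be read back once the cluster is up. A short, hedged sketch against the 2.x Admin API, not part of this test:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class PrintMetaDescriptor {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Fetch the descriptor InitMetaProcedure wrote and dump its column families.
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      for (ColumnFamilyDescriptor cf : meta.getColumnFamilies()) {
        System.out.println(cf.getNameAsString() + " blocksize=" + cf.getBlocksize());
      }
    }
  }
}
```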
2023-07-13 15:16:09,690 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:16:09,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:16:09,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:16:09,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:16:09,695 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:16:09,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261369697,5,FailOnTimeoutGroup] 2023-07-13 15:16:09,697 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261369697,5,FailOnTimeoutGroup] 2023-07-13 15:16:09,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:09,697 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 15:16:09,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:09,700 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:09,754 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:34275, startcode=1689261367926 2023-07-13 15:16:09,754 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:36737, startcode=1689261368119 2023-07-13 15:16:09,754 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:33725, startcode=1689261367727 2023-07-13 15:16:09,763 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,768 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:09,768 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:09,779 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,779 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:09,780 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:16:09,781 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,782 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:09,782 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:09,782 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:09,782 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:09,783 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32963 2023-07-13 15:16:09,784 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:09,784 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:09,784 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32963 2023-07-13 15:16:09,784 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:09,784 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:09,784 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32963 2023-07-13 15:16:09,792 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:09,793 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:09,794 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:09,794 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:09,797 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,809 WARN [RS:1;jenkins-hbase4:34275] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:09,799 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,798 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,810 WARN [RS:0;jenkins-hbase4:33725] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:09,810 WARN [RS:2;jenkins-hbase4:36737] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
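At this point all three region servers (33725, 34275 and 36737) have registered with the master and the rsgroup ServerEventsListenerThread has folded them into the implicit 'default' group. A hedged sketch of listing that group with the rsgroup client shipped in the 2.4 hbase-rsgroup module (connection setup is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Newly registered region servers always start out in the "default" group.
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + defaultGroup.getServers());
    }
  }
}
```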
2023-07-13 15:16:09,809 INFO [RS:1;jenkins-hbase4:34275] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:09,811 INFO [RS:2;jenkins-hbase4:36737] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:09,810 INFO [RS:0;jenkins-hbase4:33725] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:09,811 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,811 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33725,1689261367727] 2023-07-13 15:16:09,811 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36737,1689261368119] 2023-07-13 15:16:09,811 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34275,1689261367926] 2023-07-13 15:16:09,811 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,811 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,848 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,848 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,849 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,849 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,850 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,850 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,861 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:09,862 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ZKUtil(162): 
regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:09,869 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:09,869 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:09,874 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:09,877 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:09,878 DEBUG [RS:0;jenkins-hbase4:33725] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:09,878 DEBUG [RS:1;jenkins-hbase4:34275] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:09,878 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:09,878 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:09,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,880 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:09,888 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:09,889 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:09,895 INFO [RS:2;jenkins-hbase4:36737] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:09,897 INFO [RS:1;jenkins-hbase4:34275] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:09,898 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,897 INFO [RS:0;jenkins-hbase4:33725] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:09,899 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:09,902 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:09,902 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:09,903 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:09,905 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:09,907 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:09,911 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
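The CompactionConfiguration entries above report the effective store-compaction settings (minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0), and the FlushLargeStoresPolicy message notes that the per-table key hbase.hregion.percolumnfamilyflush.size.lower.bound was simply left unset. The sketch below sets the usual HBase configuration keys behind those compaction values; the key names are not spelled out in the log itself and the class name CompactionTuningSketch is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The values match what CompactionConfiguration(173) reported above.
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    System.out.println("compaction ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
  }
}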
2023-07-13 15:16:09,915 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:09,922 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:09,923 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10448453440, jitterRate=-0.026911944150924683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:09,923 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:09,923 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:09,923 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:09,923 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:09,923 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:09,923 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:09,926 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:09,926 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:09,933 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:09,934 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 15:16:09,936 INFO [RS:0;jenkins-hbase4:33725] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:09,936 INFO [RS:2;jenkins-hbase4:36737] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:09,936 INFO [RS:1;jenkins-hbase4:34275] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:09,989 INFO [RS:0;jenkins-hbase4:33725] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:09,989 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:16:09,990 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:09,989 INFO [RS:2;jenkins-hbase4:36737] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:09,989 INFO [RS:1;jenkins-hbase4:34275] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:09,997 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:09,997 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:09,997 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:09,998 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:10,002 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:10,011 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,011 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,012 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,013 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
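The ChoreService lines above register periodic chores (CompactionThroughputTuner, CompactedHFilesCleaner, CompactionChecker and so on) on each region server. The sketch below exercises the same ChoreService/ScheduledChore mechanism in isolation; the chore name, period and the anonymous Stoppable are illustrative rather than taken from the test.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Minimal Stoppable so a chore can be built outside a running region server.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService choreService = new ChoreService("sketch"); // thread-name prefix, illustrative
    ScheduledChore chore = new ScheduledChore("DemoChore", stopper, 1000) {
      @Override protected void chore() {
        // Chores such as CompactionChecker do their periodic work in this callback.
        System.out.println("chore tick");
      }
    };
    // Registration is what emits the "Chore ScheduledChore name=..., period=... is enabled." lines above.
    choreService.scheduleChore(chore);
    Thread.sleep(3000);
    choreService.shutdown();
  }
}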
2023-07-13 15:16:10,013 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,012 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,013 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,013 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,013 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,013 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:10,014 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:10,014 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:16:10,013 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service 
name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:2;jenkins-hbase4:36737] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,014 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:10,015 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,015 DEBUG [RS:1;jenkins-hbase4:34275] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,016 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,016 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,016 DEBUG [RS:0;jenkins-hbase4:33725] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:10,020 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,020 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,020 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,021 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,022 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,022 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:10,022 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,022 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 15:16:10,022 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,022 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,037 INFO [RS:0;jenkins-hbase4:33725] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:10,037 INFO [RS:1;jenkins-hbase4:34275] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:10,037 INFO [RS:2;jenkins-hbase4:36737] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:10,041 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34275,1689261367926-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,041 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33725,1689261367727-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,041 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36737,1689261368119-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:10,058 INFO [RS:2;jenkins-hbase4:36737] regionserver.Replication(203): jenkins-hbase4.apache.org,36737,1689261368119 started 2023-07-13 15:16:10,058 INFO [RS:1;jenkins-hbase4:34275] regionserver.Replication(203): jenkins-hbase4.apache.org,34275,1689261367926 started 2023-07-13 15:16:10,058 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36737,1689261368119, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36737, sessionid=0x1015f4159470003 2023-07-13 15:16:10,058 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34275,1689261367926, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34275, sessionid=0x1015f4159470002 2023-07-13 15:16:10,058 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:10,058 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:10,058 DEBUG [RS:2;jenkins-hbase4:36737] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:10,058 DEBUG [RS:1;jenkins-hbase4:34275] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:10,060 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34275,1689261367926' 2023-07-13 15:16:10,060 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36737,1689261368119' 2023-07-13 15:16:10,061 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:10,061 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:10,061 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:10,061 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:10,062 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:10,062 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:10,062 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:10,062 INFO [RS:0;jenkins-hbase4:33725] regionserver.Replication(203): jenkins-hbase4.apache.org,33725,1689261367727 started 2023-07-13 15:16:10,062 DEBUG [RS:2;jenkins-hbase4:36737] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:10,062 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33725,1689261367727, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33725, sessionid=0x1015f4159470001 2023-07-13 15:16:10,062 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(51): Procedure 
online-snapshot starting 2023-07-13 15:16:10,062 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:10,062 DEBUG [RS:0;jenkins-hbase4:33725] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:10,062 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33725,1689261367727' 2023-07-13 15:16:10,062 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:10,062 DEBUG [RS:1;jenkins-hbase4:34275] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:10,062 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36737,1689261368119' 2023-07-13 15:16:10,063 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34275,1689261367926' 2023-07-13 15:16:10,063 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:10,063 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:10,063 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:10,063 DEBUG [RS:1;jenkins-hbase4:34275] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:10,063 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:10,063 DEBUG [RS:2;jenkins-hbase4:36737] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:10,064 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:10,064 DEBUG [RS:0;jenkins-hbase4:33725] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:10,064 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33725,1689261367727' 2023-07-13 15:16:10,064 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:10,064 DEBUG [RS:1;jenkins-hbase4:34275] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:10,064 INFO [RS:1;jenkins-hbase4:34275] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:10,064 DEBUG [RS:2;jenkins-hbase4:36737] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:10,064 INFO [RS:1;jenkins-hbase4:34275] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
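The procedure members registered above ('flush-table-proc' and 'online-snapshot') serve cluster-wide flush and snapshot requests, while the quota managers stay idle because quota support is disabled by default. Below is a client-side sketch of the operations those members handle; the table name demo_table and snapshot name demo_snapshot are hypothetical and not part of this test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushAndSnapshotSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("demo_table");   // hypothetical table
      admin.flush(table);                                   // handled by the flush-table-proc members
      admin.snapshot("demo_snapshot", table);               // handled by the online-snapshot members
    }
  }
}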
2023-07-13 15:16:10,064 DEBUG [RS:0;jenkins-hbase4:33725] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:10,064 INFO [RS:2;jenkins-hbase4:36737] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:10,064 INFO [RS:2;jenkins-hbase4:36737] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 15:16:10,065 DEBUG [RS:0;jenkins-hbase4:33725] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:10,065 INFO [RS:0;jenkins-hbase4:33725] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:10,065 INFO [RS:0;jenkins-hbase4:33725] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 15:16:10,176 DEBUG [jenkins-hbase4:38141] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:10,177 INFO [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34275%2C1689261367926, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:10,177 INFO [RS:0;jenkins-hbase4:33725] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33725%2C1689261367727, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:10,177 INFO [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36737%2C1689261368119, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:10,190 DEBUG [jenkins-hbase4:38141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:10,191 DEBUG [jenkins-hbase4:38141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:10,191 DEBUG [jenkins-hbase4:38141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:10,191 DEBUG [jenkins-hbase4:38141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:10,191 DEBUG [jenkins-hbase4:38141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:10,195 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34275,1689261367926, state=OPENING 2023-07-13 15:16:10,205 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 
15:16:10,205 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:10,205 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:10,215 INFO [RS:0;jenkins-hbase4:33725] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727/jenkins-hbase4.apache.org%2C33725%2C1689261367727.1689261370185 2023-07-13 15:16:10,217 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 15:16:10,219 DEBUG [RS:0;jenkins-hbase4:33725] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]] 2023-07-13 15:16:10,219 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:10,223 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:10,223 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:10,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:10,231 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:10,232 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:10,232 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:10,232 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:10,233 DEBUG [RS-EventLoopGroup-5-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:10,243 INFO [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119/jenkins-hbase4.apache.org%2C36737%2C1689261368119.1689261370185 2023-07-13 15:16:10,243 INFO [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926/jenkins-hbase4.apache.org%2C34275%2C1689261367926.1689261370185 2023-07-13 15:16:10,243 DEBUG [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]] 2023-07-13 15:16:10,244 DEBUG [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]] 2023-07-13 15:16:10,290 WARN [ReadOnlyZKClient-127.0.0.1:56695@0x3f16ae4c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 15:16:10,316 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:10,320 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52190, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:10,321 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34275] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:52190 deadline: 1689261430321, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:10,415 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:10,419 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:10,424 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52200, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:10,438 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:10,439 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:10,442 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C34275%2C1689261367926.meta, suffix=.meta, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:10,464 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:10,465 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:10,466 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:10,472 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926/jenkins-hbase4.apache.org%2C34275%2C1689261367926.meta.1689261370444.meta 2023-07-13 15:16:10,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:10,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:10,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:10,478 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:10,480 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
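The coprocessor load above comes from the coprocessor$1 attribute carried in the hbase:meta table descriptor. The sketch below declares the same endpoint on a table descriptor through the client API; the table name demo_with_cp and class name CoprocessorAttachSketch are illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CoprocessorAttachSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_with_cp"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        // Same endpoint class the RS_OPEN_META handler just loaded for hbase:meta.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td); // the coprocessor shows up as a TABLE_ATTRIBUTES entry, as in the log
  }
}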
2023-07-13 15:16:10,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:10,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:10,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:10,486 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:10,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:10,491 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:10,491 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:10,491 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:10,492 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:10,492 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:10,493 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:10,494 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:10,494 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:10,495 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:10,495 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:10,496 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:10,496 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:10,497 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:10,498 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:10,499 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:10,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:10,512 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 15:16:10,515 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:10,516 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10121721440, jitterRate=-0.057341232895851135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:10,516 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:10,530 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689261370412 2023-07-13 15:16:10,551 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34275,1689261367926, state=OPEN 2023-07-13 15:16:10,553 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:10,554 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:10,555 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:10,555 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:10,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 15:16:10,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34275,1689261367926 in 329 msec 2023-07-13 15:16:10,566 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 15:16:10,566 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 573 msec 2023-07-13 15:16:10,588 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0930 sec 2023-07-13 15:16:10,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261370588, completionTime=-1 2023-07-13 15:16:10,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 15:16:10,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
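Once the master publishes the meta location to /hbase/meta-region-server (state=OPEN above), clients can resolve it; the earlier "Meta region is in state OPENING" warning is the connection registry retrying until that point. Below is a client-side sketch of that lookup, assuming a reachable cluster configuration; the class name MetaLocationSketch is illustrative.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Forces a fresh lookup of the region holding the empty start key of hbase:meta.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("hbase:meta is served by " + loc.getServerName());
    }
  }
}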
2023-07-13 15:16:10,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:10,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261430642 2023-07-13 15:16:10,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261490642 2023-07-13 15:16:10,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 53 msec 2023-07-13 15:16:10,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38141,1689261365700-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38141,1689261365700-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38141,1689261365700-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38141, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:10,666 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 15:16:10,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 15:16:10,678 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:10,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:10,699 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:10,701 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:10,718 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:10,721 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 empty. 2023-07-13 15:16:10,721 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:10,721 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 15:16:10,763 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:10,765 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:10,786 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
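Here the master is creating the hbase:namespace system table itself via CreateTableProcedure (pid=4 above). The sketch below shows the equivalent client-side calls for a user namespace and table; the names demo_ns and t1 are hypothetical and purely illustrative.

import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class NamespaceCreateSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // A user namespace plus one table in it, mirroring what the master does for hbase:namespace.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo_ns", "t1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
          .build());
    }
  }
}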
2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:10,786 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:10,786 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:10,790 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:10,807 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261370793"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261370793"}]},"ts":"1689261370793"} 2023-07-13 15:16:10,836 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:10,837 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:10,840 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:10,840 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:10,843 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:10,845 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:10,847 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261370840"}]},"ts":"1689261370840"} 2023-07-13 15:16:10,850 DEBUG 
[HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:10,851 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 15:16:10,851 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 empty. 2023-07-13 15:16:10,852 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:10,852 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 15:16:10,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:10,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:10,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:10,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:10,856 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:10,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN}] 2023-07-13 15:16:10,861 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN 2023-07-13 15:16:10,863 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:10,876 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:10,878 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:10,902 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:10,902 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:10,903 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:10,903 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:10,903 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:10,903 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:10,903 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:10,903 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:10,907 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:10,909 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261370909"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261370909"}]},"ts":"1689261370909"} 2023-07-13 15:16:10,913 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 15:16:10,919 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:10,919 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261370919"}]},"ts":"1689261370919"} 2023-07-13 15:16:10,925 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 15:16:10,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:10,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:10,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:10,931 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:10,932 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:10,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN}] 2023-07-13 15:16:10,936 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN 2023-07-13 15:16:10,938 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33725,1689261367727; forceNewPlan=false, retain=false 2023-07-13 15:16:10,939 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-13 15:16:10,941 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:10,942 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261370941"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261370941"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261370941"}]},"ts":"1689261370941"} 2023-07-13 15:16:10,947 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:10,947 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261370947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261370947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261370947"}]},"ts":"1689261370947"} 2023-07-13 15:16:10,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:10,952 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,33725,1689261367727}] 2023-07-13 15:16:11,107 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,108 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:11,110 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:11,111 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:11,111 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37806, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:11,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:11,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,112 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,115 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,117 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:11,117 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:11,118 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f9b3c3c0c701a7e057738cfe2a31027 columnFamilyName info 2023-07-13 15:16:11,118 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:11,118 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:11,119 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:11,119 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(310): Store=8f9b3c3c0c701a7e057738cfe2a31027/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:11,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. service=MultiRowMutationService 2023-07-13 15:16:11,120 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 15:16:11,120 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:11,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,122 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,123 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,125 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:11,125 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:11,126 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 111352044b1bd403da18db964c499c82 columnFamilyName m 2023-07-13 15:16:11,126 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(310): Store=111352044b1bd403da18db964c499c82/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:11,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:11,128 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,129 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,130 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:11,131 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f9b3c3c0c701a7e057738cfe2a31027; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9929614560, jitterRate=-0.07523258030414581}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:11,131 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:11,133 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:11,133 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., pid=8, masterSystemTime=1689261371102 2023-07-13 15:16:11,137 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:11,137 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:11,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:11,139 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 111352044b1bd403da18db964c499c82; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@11542b55, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:11,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:11,139 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:11,140 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261371139"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261371139"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261371139"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261371139"}]},"ts":"1689261371139"} 2023-07-13 15:16:11,140 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., pid=9, masterSystemTime=1689261371107 2023-07-13 15:16:11,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:11,145 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:11,146 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,148 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261371146"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261371146"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261371146"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261371146"}]},"ts":"1689261371146"} 2023-07-13 15:16:11,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-13 15:16:11,156 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,34275,1689261367926 in 202 msec 2023-07-13 15:16:11,160 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-13 15:16:11,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,33725,1689261367727 in 202 msec 2023-07-13 15:16:11,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-13 15:16:11,167 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN in 298 msec 2023-07-13 15:16:11,169 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:11,169 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261371169"}]},"ts":"1689261371169"} 2023-07-13 15:16:11,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 15:16:11,170 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN in 229 msec 2023-07-13 15:16:11,172 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 15:16:11,173 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:11,173 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261371173"}]},"ts":"1689261371173"} 2023-07-13 15:16:11,176 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:11,177 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 15:16:11,180 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 497 msec 2023-07-13 15:16:11,180 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:11,184 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 343 msec 2023-07-13 15:16:11,199 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 15:16:11,200 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:11,200 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:11,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:11,245 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:11,254 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:11,254 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37816, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:11,258 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:11,258 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-13 15:16:11,264 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 34 msec 2023-07-13 15:16:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:11,288 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:11,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 23 msec 2023-07-13 15:16:11,313 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:11,316 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:11,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.955sec 2023-07-13 15:16:11,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 15:16:11,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:11,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:11,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38141,1689261365700-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:11,325 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38141,1689261365700-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-13 15:16:11,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:11,338 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:11,338 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,342 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:11,349 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:11,403 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x7c41c5f7 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:11,408 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3c8faac9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:11,423 DEBUG [hconnection-0x50aa0278-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:11,434 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52216, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:11,446 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:11,447 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:11,456 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:11,460 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:11,473 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:11,473 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:11,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:11,479 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x2b6bba48 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:11,483 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@defc5ad, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:11,483 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:11,486 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:11,486 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f415947000a connected 2023-07-13 15:16:11,517 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=425, OpenFileDescriptor=677, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4592 2023-07-13 15:16:11,519 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testClearNotProcessedDeadServer 2023-07-13 15:16:11,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:11,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:11,582 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:11,593 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:11,594 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:11,598 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41955 2023-07-13 15:16:11,598 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating 
BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:11,599 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:11,601 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:11,605 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:11,608 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41955 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:11,612 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:419550x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:11,614 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41955-0x1015f415947000b connected 2023-07-13 15:16:11,614 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:11,615 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:11,616 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:11,618 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-13 15:16:11,618 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41955 2023-07-13 15:16:11,618 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41955 2023-07-13 15:16:11,619 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-13 15:16:11,619 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41955 2023-07-13 15:16:11,621 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:11,621 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:11,621 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:11,622 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:11,622 INFO [Listener at localhost/35161] 
http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:11,622 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:11,622 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:11,623 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 35539 2023-07-13 15:16:11,623 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:11,624 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:11,625 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b22a6fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:11,625 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:11,625 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b9dbcbd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:11,741 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:11,741 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:11,742 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:11,742 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:11,743 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:11,744 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c20dc81{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-35539-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1135184029133900768/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:11,745 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@8ad847b{HTTP/1.1, (http/1.1)}{0.0.0.0:35539} 2023-07-13 15:16:11,745 INFO [Listener at localhost/35161] server.Server(415): Started @11745ms 2023-07-13 15:16:11,748 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:11,748 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc 
initializing 2023-07-13 15:16:11,757 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:11,757 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:11,761 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:11,766 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ReadOnlyZKClient(139): Connect 0x1702f866 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:11,774 DEBUG [RS:3;jenkins-hbase4:41955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19c504f6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:11,775 DEBUG [RS:3;jenkins-hbase4:41955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1dfa9316, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:11,784 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41955 2023-07-13 15:16:11,784 INFO [RS:3;jenkins-hbase4:41955] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:11,784 INFO [RS:3;jenkins-hbase4:41955] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:11,784 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:11,785 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:41955, startcode=1689261371593 2023-07-13 15:16:11,785 DEBUG [RS:3;jenkins-hbase4:41955] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:11,794 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45313, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:11,795 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,795 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:11,796 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:11,796 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:11,796 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32963 2023-07-13 15:16:11,804 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:11,804 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:11,804 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:11,804 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:11,804 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,804 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,804 WARN [RS:3;jenkins-hbase4:41955] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:11,805 INFO [RS:3;jenkins-hbase4:41955] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:11,805 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41955,1689261371593] 2023-07-13 15:16:11,805 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,805 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:11,805 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,805 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,809 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:11,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:11,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:11,811 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:11,812 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:11,812 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:11,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:11,814 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,815 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,817 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,817 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:11,818 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:11,818 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,819 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:11,820 INFO [RS:3;jenkins-hbase4:41955] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:11,823 INFO [RS:3;jenkins-hbase4:41955] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:11,823 INFO [RS:3;jenkins-hbase4:41955] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:11,823 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:11,823 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:11,825 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
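The entries above show the new region server (RS:3, port 41955) applying its memstore limit, its compaction-throughput bounds, and its maintenance chores. A rough sketch of the configuration keys behind those reported numbers follows; the values are illustrative defaults, not settings taken from this run, which is simply reporting its effective configuration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class RegionServerTuningSketch {
      // Illustrative values only; the log above just reports the effective settings.
      static Configuration tunedConf() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the heap reserved for all memstores ("globalMemStoreLimit" above).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        // Low-water mark as a fraction of that limit ("globalMemStoreLimitLowMark").
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Bounds used by PressureAwareCompactionThroughputController (100 MB/s and 50 MB/s above).
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }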
2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,826 DEBUG [RS:3;jenkins-hbase4:41955] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:11,828 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:11,828 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:11,828 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:11,839 INFO [RS:3;jenkins-hbase4:41955] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:11,839 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41955,1689261371593-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:11,850 INFO [RS:3;jenkins-hbase4:41955] regionserver.Replication(203): jenkins-hbase4.apache.org,41955,1689261371593 started 2023-07-13 15:16:11,851 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41955,1689261371593, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41955, sessionid=0x1015f415947000b 2023-07-13 15:16:11,851 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:11,851 DEBUG [RS:3;jenkins-hbase4:41955] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,851 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41955,1689261371593' 2023-07-13 15:16:11,851 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:11,851 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41955,1689261371593' 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:11,852 DEBUG [RS:3;jenkins-hbase4:41955] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:11,853 DEBUG [RS:3;jenkins-hbase4:41955] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:11,853 INFO [RS:3;jenkins-hbase4:41955] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:11,853 INFO [RS:3;jenkins-hbase4:41955] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
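At this point RS:3 has registered its ephemeral znode, started its flush-table and online-snapshot procedure members, and reported itself as serving on port 41955 with quota support disabled. A minimal sketch of how a test can bring such an extra region server up on the mini cluster, assuming an HBaseTestingUtility like the one driving this run (the class and helper names below are made up):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    final class ExtraRegionServerSketch {
      // Start one more region server on the mini cluster and block until it is online.
      static JVMClusterUtil.RegionServerThread startExtraRegionServer(HBaseTestingUtility util)
          throws Exception {
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
        return rst;
      }
    }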
2023-07-13 15:16:11,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:11,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:11,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:11,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:11,869 DEBUG [hconnection-0x609dbbf-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:11,873 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:11,878 DEBUG [hconnection-0x609dbbf-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:11,880 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37818, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:11,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:11,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:11,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:11,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:11,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34824 deadline: 1689262571891, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:11,893 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:11,895 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:11,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:11,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:11,897 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33725, jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:11,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:11,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:11,903 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(260): testClearNotProcessedDeadServer 2023-07-13 15:16:11,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:11,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:11,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup deadServerGroup 2023-07-13 15:16:11,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:11,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-13 15:16:11,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:11,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:11,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:11,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:11,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33725] to rsgroup deadServerGroup 2023-07-13 15:16:11,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:11,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:11,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-13 15:16:11,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:11,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(238): Moving server region 111352044b1bd403da18db964c499c82, which do not belong to RSGroup deadServerGroup 2023-07-13 15:16:11,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:11,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:11,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:11,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:11,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:11,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:11,934 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 
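The ConstraintException trace above comes from the per-method setup/cleanup in TestRSGroupsBase, which tries to move the master's address (port 38141) into a "master" group and logs the rejection as "Got this on setup, FYI". The entries that follow belong to testClearNotProcessedDeadServer: a "deadServerGroup" is added, the server on port 33725 is moved into it, and the hbase:rsgroup region hosted there is first moved off via a REOPEN/MOVE procedure. A hedged sketch of the two admin requests, using the RSGroupAdminClient named in the stack trace (the wrapper class, helper name, and connection argument are illustrative):

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class DeadServerGroupSketch {
      // Create the group and move one server into it, mirroring the two requests logged above.
      static void moveServerIntoDeadServerGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("deadServerGroup");
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:33725")),
            "deadServerGroup");
      }
    }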
2023-07-13 15:16:11,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 15:16:11,935 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:11,935 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261371935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261371935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261371935"}]},"ts":"1689261371935"} 2023-07-13 15:16:11,938 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,33725,1689261367727}] 2023-07-13 15:16:11,957 INFO [RS:3;jenkins-hbase4:41955] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41955%2C1689261371593, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:11,980 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:11,980 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:11,981 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:11,985 INFO [RS:3;jenkins-hbase4:41955] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593/jenkins-hbase4.apache.org%2C41955%2C1689261371593.1689261371958 2023-07-13 15:16:11,987 DEBUG [RS:3;jenkins-hbase4:41955] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:12,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:12,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:12,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,106 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 111352044b1bd403da18db964c499c82 1/1 column families, dataSize=1.27 KB heapSize=2.24 KB 2023-07-13 15:16:12,209 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.27 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:12,267 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/9cc85fdd31c84a01b1065fb63289ca00 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:12,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00, entries=3, sequenceid=9, filesize=5.1 K 2023-07-13 15:16:12,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.27 KB/1298, heapSize ~2.23 KB/2280, currentSize=0 B/0 for 111352044b1bd403da18db964c499c82 in 176ms, sequenceid=9, compaction requested=false 2023-07-13 15:16:12,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:12,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 15:16:12,300 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:12,300 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:12,301 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:12,301 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 111352044b1bd403da18db964c499c82 move to jenkins-hbase4.apache.org,41955,1689261371593 record at close sequenceid=9 2023-07-13 15:16:12,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,305 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=CLOSED 2023-07-13 15:16:12,305 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261372305"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261372305"}]},"ts":"1689261372305"} 2023-07-13 15:16:12,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 15:16:12,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,33725,1689261367727 in 370 msec 2023-07-13 15:16:12,313 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41955,1689261371593; forceNewPlan=false, retain=false 2023-07-13 15:16:12,463 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:12,463 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:12,464 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261372463"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261372463"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261372463"}]},"ts":"1689261372463"} 2023-07-13 15:16:12,468 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,41955,1689261371593}] 2023-07-13 15:16:12,624 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:12,625 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:12,629 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48248, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:12,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. service=MultiRowMutationService 2023-07-13 15:16:12,648 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,651 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,652 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:12,652 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:12,653 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 111352044b1bd403da18db964c499c82 columnFamilyName m 2023-07-13 15:16:12,671 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:12,672 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(310): Store=111352044b1bd403da18db964c499c82/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:12,674 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,678 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,684 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:12,685 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 111352044b1bd403da18db964c499c82; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3c4b5b80, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:12,685 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:12,691 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., pid=14, masterSystemTime=1689261372624 2023-07-13 15:16:12,699 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,700 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:12,701 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:12,701 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261372701"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261372701"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261372701"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261372701"}]},"ts":"1689261372701"} 2023-07-13 15:16:12,710 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-13 15:16:12,710 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,41955,1689261371593 in 238 msec 2023-07-13 15:16:12,713 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE in 778 msec 2023-07-13 15:16:12,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-13 15:16:12,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33725,1689261367727] are moved back to default 2023-07-13 15:16:12,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: default => deadServerGroup 2023-07-13 15:16:12,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.MoveServers 2023-07-13 15:16:12,937 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33725] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:37818 deadline: 1689261432937, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=41955 startCode=1689261371593. As of locationSeqNum=9. 2023-07-13 15:16:13,042 DEBUG [hconnection-0x609dbbf-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:13,044 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48254, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:13,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-13 15:16:13,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,070 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:13,072 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37820, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:13,073 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33725] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33725,1689261367727' ***** 2023-07-13 15:16:13,073 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33725] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x50aa0278 2023-07-13 15:16:13,073 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:13,080 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:13,084 INFO [RS:0;jenkins-hbase4:33725] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5a679f9e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:13,087 INFO [RS:0;jenkins-hbase4:33725] server.AbstractConnector(383): Stopped ServerConnector@25eab107{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:13,088 INFO [RS:0;jenkins-hbase4:33725] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:13,088 INFO [RS:0;jenkins-hbase4:33725] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@7ea3cad5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:13,089 INFO [RS:0;jenkins-hbase4:33725] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69738833{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:13,091 INFO [RS:0;jenkins-hbase4:33725] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:13,091 INFO [RS:0;jenkins-hbase4:33725] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:13,091 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:13,091 INFO [RS:0;jenkins-hbase4:33725] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:13,091 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:13,091 DEBUG [RS:0;jenkins-hbase4:33725] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x381f9eb6 to 127.0.0.1:56695 2023-07-13 15:16:13,091 DEBUG [RS:0;jenkins-hbase4:33725] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:13,091 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33725,1689261367727; all regions closed. 2023-07-13 15:16:13,106 DEBUG [RS:0;jenkins-hbase4:33725] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:13,106 INFO [RS:0;jenkins-hbase4:33725] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33725%2C1689261367727:(num 1689261370185) 2023-07-13 15:16:13,106 DEBUG [RS:0;jenkins-hbase4:33725] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:13,107 INFO [RS:0;jenkins-hbase4:33725] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:13,107 INFO [RS:0;jenkins-hbase4:33725] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:13,107 INFO [RS:0;jenkins-hbase4:33725] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:13,107 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:13,107 INFO [RS:0;jenkins-hbase4:33725] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:13,107 INFO [RS:0;jenkins-hbase4:33725] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
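Here the region server on port 33725 shuts down: its web UI connectors stop, the flush and snapshot managers stop, and its WAL is rolled into oldWALs. The "Called by admin client" line indicates the stop was requested remotely; one way a test can trigger exactly this banner is the Admin API, sketched below (whether this particular test uses this call is an assumption; the host:port is the one from this run):

    import org.apache.hadoop.hbase.client.Admin;

    final class StopRegionServerSketch {
      // Remotely ask a region server to stop itself; a call like this produces the
      // "***** STOPPING region server ... *****" banner seen in the log above.
      static void stopServer(Admin admin) throws Exception {
        admin.stopRegionServer("jenkins-hbase4.apache.org:33725");
      }
    }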
2023-07-13 15:16:13,108 INFO [RS:0;jenkins-hbase4:33725] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33725 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,119 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,118 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,121 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33725,1689261367727] 2023-07-13 15:16:13,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,121 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33725,1689261367727; numProcessing=1 2023-07-13 15:16:13,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,122 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,122 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,122 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33725,1689261367727 already deleted, retry=false 2023-07-13 15:16:13,123 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,33725,1689261367727 on jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:13,123 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,124 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,124 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:13,126 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:13,126 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,127 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:13,127 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,127 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33725,1689261367727 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:13,127 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,128 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,129 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,133 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false 2023-07-13 15:16:13,133 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=15 for jenkins-hbase4.apache.org,33725,1689261367727 (carryingMeta=false) jenkins-hbase4.apache.org,33725,1689261367727/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@77a3e8cc[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:13,134 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:13,135 WARN [RS-EventLoopGroup-5-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:33725 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:33725 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:13,137 DEBUG [RS-EventLoopGroup-5-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:33725 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:33725 2023-07-13 15:16:13,138 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=15, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false 2023-07-13 15:16:13,140 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,33725,1689261367727 had 0 regions 2023-07-13 15:16:13,141 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=15, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:13,143 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727-splitting 2023-07-13 15:16:13,145 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727-splitting dir is empty, no logs to split. 2023-07-13 15:16:13,145 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,33725,1689261367727 WAL count=0, meta=false 2023-07-13 15:16:13,150 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727-splitting dir is empty, no logs to split. 2023-07-13 15:16:13,150 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,33725,1689261367727 WAL count=0, meta=false 2023-07-13 15:16:13,150 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,33725,1689261367727 WAL splitting is done? 
wals=0, meta=false 2023-07-13 15:16:13,157 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,33725,1689261367727 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33725,1689261367727-splitting does not exist. 2023-07-13 15:16:13,160 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,33725,1689261367727 after splitting done 2023-07-13 15:16:13,160 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,33725,1689261367727 from processing; numProcessing=0 2023-07-13 15:16:13,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false in 35 msec 2023-07-13 15:16:13,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=deadServerGroup 2023-07-13 15:16:13,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:13,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:13,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:13,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:13,242 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:13,245 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48268, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:13,249 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,249 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,250 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-13 15:16:13,250 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:13,254 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:13,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-13 15:16:13,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:13,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:13,269 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:13,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33725] to rsgroup default 2023-07-13 15:16:13,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(258): Dropping jenkins-hbase4.apache.org:33725 during move-to-default rsgroup because not online 2023-07-13 15:16:13,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/deadServerGroup 2023-07-13 15:16:13,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group deadServerGroup, current retry=0 2023-07-13 15:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [] are moved back to deadServerGroup 2023-07-13 15:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: deadServerGroup => default 2023-07-13 15:16:13,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup deadServerGroup 2023-07-13 15:16:13,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:13,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,293 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:13,295 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:13,295 INFO [RS:0;jenkins-hbase4:33725] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33725,1689261367727; zookeeper connection closed. 
2023-07-13 15:16:13,295 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33725-0x1015f4159470001, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:13,295 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1729a173] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1729a173 2023-07-13 15:16:13,307 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:13,308 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:13,311 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43693 2023-07-13 15:16:13,312 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:13,313 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:13,313 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:13,315 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:13,316 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43693 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:13,319 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:436930x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:13,320 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43693-0x1015f415947000d connected 2023-07-13 15:16:13,320 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 
15:16:13,321 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:13,322 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:13,324 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43693 2023-07-13 15:16:13,324 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43693 2023-07-13 15:16:13,325 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43693 2023-07-13 15:16:13,325 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43693 2023-07-13 15:16:13,325 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43693 2023-07-13 15:16:13,327 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:13,327 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:13,327 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:13,328 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:13,328 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:13,328 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:13,328 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:13,329 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 36953 2023-07-13 15:16:13,329 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:13,330 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:13,331 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78bafdff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:13,331 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:13,331 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67ca648{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:13,447 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:13,448 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:13,449 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:13,449 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:13,450 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:13,451 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1899cb6d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-36953-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7528174484194780907/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:13,453 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@9b30d54{HTTP/1.1, (http/1.1)}{0.0.0.0:36953} 2023-07-13 15:16:13,453 INFO [Listener at localhost/35161] server.Server(415): Started @13453ms 2023-07-13 15:16:13,458 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:13,460 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:13,462 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:13,462 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:13,470 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:13,471 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ReadOnlyZKClient(139): Connect 0x53eeb546 to 
127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:13,475 DEBUG [RS:4;jenkins-hbase4:43693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d29414b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:13,476 DEBUG [RS:4;jenkins-hbase4:43693] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5916de36, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:13,484 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:43693 2023-07-13 15:16:13,484 INFO [RS:4;jenkins-hbase4:43693] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:13,484 INFO [RS:4;jenkins-hbase4:43693] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:13,484 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:13,485 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38141,1689261365700 with isa=jenkins-hbase4.apache.org/172.31.14.131:43693, startcode=1689261373307 2023-07-13 15:16:13,485 DEBUG [RS:4;jenkins-hbase4:43693] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:13,488 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47817, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:13,488 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,488 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:13,488 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:13,489 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:13,489 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=32963 2023-07-13 15:16:13,490 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,490 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,490 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,490 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:13,491 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43693,1689261373307] 2023-07-13 15:16:13,491 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,491 WARN [RS:4;jenkins-hbase4:43693] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:13,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,492 INFO [RS:4;jenkins-hbase4:43693] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:13,492 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,493 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,493 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,493 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,493 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,494 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 2 2023-07-13 15:16:13,494 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,497 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38141,1689261365700] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:13,500 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:13,501 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:13,501 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:13,502 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ZKUtil(162): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,503 DEBUG [RS:4;jenkins-hbase4:43693] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:13,503 INFO [RS:4;jenkins-hbase4:43693] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:13,505 INFO [RS:4;jenkins-hbase4:43693] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:13,507 INFO [RS:4;jenkins-hbase4:43693] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:13,507 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:13,507 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:13,508 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,509 DEBUG [RS:4;jenkins-hbase4:43693] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:13,513 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:13,513 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:13,513 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:13,525 INFO [RS:4;jenkins-hbase4:43693] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:13,525 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43693,1689261373307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:13,535 INFO [RS:4;jenkins-hbase4:43693] regionserver.Replication(203): jenkins-hbase4.apache.org,43693,1689261373307 started 2023-07-13 15:16:13,535 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43693,1689261373307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43693, sessionid=0x1015f415947000d 2023-07-13 15:16:13,536 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:13,536 DEBUG [RS:4;jenkins-hbase4:43693] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,536 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43693,1689261373307' 2023-07-13 15:16:13,536 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:13,536 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43693,1689261373307' 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:13,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:13,537 DEBUG [RS:4;jenkins-hbase4:43693] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:13,538 DEBUG [RS:4;jenkins-hbase4:43693] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:13,538 INFO [RS:4;jenkins-hbase4:43693] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:13,538 INFO [RS:4;jenkins-hbase4:43693] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:13,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:13,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:13,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:13,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 69 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34824 deadline: 1689262573553, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:13,554 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:13,556 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:13,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,557 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:13,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:13,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,584 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearNotProcessedDeadServer Thread=484 (was 425) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:4;jenkins-hbase4:43693 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147708220_17 at /127.0.0.1:42790 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1222586266_17 at /127.0.0.1:48938 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp254524763-728 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp254524763-730 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:4;jenkins-hbase4:43693-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x1702f866 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp254524763-733 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1936334671-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1936334671-639-acceptor-0@398d8032-ServerConnector@8ad847b{HTTP/1.1, (http/1.1)}{0.0.0.0:35539} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147708220_17 at /127.0.0.1:49004 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp254524763-729 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147708220_17 at /127.0.0.1:59100 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41955 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-1ac5a599-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x53eeb546-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,41955,1689261371593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-metaLookup-shared--pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-850539781_17 at /127.0.0.1:42822 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp254524763-732 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1936334671-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1936334671-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147708220_17 at /127.0.0.1:59092 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x53eeb546 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1936334671-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1936334671-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1936334671-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x1702f866-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1936334671-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1222586266_17 at /127.0.0.1:49018 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x1702f866-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp254524763-726 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41955Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43693Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41955-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-26144534-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41955 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43693 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp254524763-731 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp254524763-727-acceptor-0@28ba90fe-ServerConnector@9b30d54{HTTP/1.1, (http/1.1)}{0.0.0.0:36953} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x53eeb546-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) - Thread LEAK? -, OpenFileDescriptor=734 (was 677) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4549 (was 4592) 2023-07-13 15:16:13,600 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=484, OpenFileDescriptor=734, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4548 2023-07-13 15:16:13,600 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testDefaultNamespaceCreateAndAssign 2023-07-13 15:16:13,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:13,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:13,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:13,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:13,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:13,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:13,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:13,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:13,636 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:13,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:13,641 INFO [RS:4;jenkins-hbase4:43693] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43693%2C1689261373307, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:13,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:13,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:13,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:13,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 97 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34824 deadline: 1689262573658, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:13,658 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:13,660 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:13,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:13,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:13,662 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:13,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:13,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:13,665 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(180): testDefaultNamespaceCreateAndAssign 2023-07-13 15:16:13,677 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:13,678 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:13,678 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:13,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'default', hbase.rsgroup.name => 'default'} 2023-07-13 15:16:13,682 INFO [RS:4;jenkins-hbase4:43693] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307/jenkins-hbase4.apache.org%2C43693%2C1689261373307.1689261373643 2023-07-13 15:16:13,683 DEBUG [RS:4;jenkins-hbase4:43693] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:13,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=default 2023-07-13 
15:16:13,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:13,699 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:13,701 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default in 20 msec 2023-07-13 15:16:13,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 15:16:13,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:13,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:13,815 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:13,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndAssign" procId is: 17 2023-07-13 15:16:13,820 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:13,820 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:13,821 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:13,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-13 15:16:13,825 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:13,827 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:13,828 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f empty. 
2023-07-13 15:16:13,828 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:13,829 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-13 15:16:13,850 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:13,851 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8bef462d0842334282195f720d7ff37f, NAME => 'Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 8bef462d0842334282195f720d7ff37f, disabling compactions & flushes 2023-07-13 15:16:13,867 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. after waiting 0 ms 2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:13,867 INFO [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 
2023-07-13 15:16:13,867 DEBUG [RegionOpenAndInit-Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 8bef462d0842334282195f720d7ff37f: 2023-07-13 15:16:13,871 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:13,872 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261373872"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261373872"}]},"ts":"1689261373872"} 2023-07-13 15:16:13,875 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:13,876 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:13,876 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261373876"}]},"ts":"1689261373876"} 2023-07-13 15:16:13,878 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:13,882 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:13,883 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, ASSIGN}] 2023-07-13 15:16:13,885 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, ASSIGN 2023-07-13 15:16:13,887 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=17, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:13,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-13 15:16:14,037 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:14,039 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=8bef462d0842334282195f720d7ff37f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:14,039 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261374039"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261374039"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261374039"}]},"ts":"1689261374039"} 2023-07-13 15:16:14,042 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE; OpenRegionProcedure 8bef462d0842334282195f720d7ff37f, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:14,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-13 15:16:14,196 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:14,197 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:14,200 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:14,205 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8bef462d0842334282195f720d7ff37f, NAME => 'Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:14,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:14,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,207 INFO [StoreOpener-8bef462d0842334282195f720d7ff37f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,209 DEBUG [StoreOpener-8bef462d0842334282195f720d7ff37f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/f 2023-07-13 15:16:14,209 DEBUG [StoreOpener-8bef462d0842334282195f720d7ff37f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/f 2023-07-13 15:16:14,210 INFO [StoreOpener-8bef462d0842334282195f720d7ff37f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8bef462d0842334282195f720d7ff37f columnFamilyName f 2023-07-13 15:16:14,211 INFO [StoreOpener-8bef462d0842334282195f720d7ff37f-1] regionserver.HStore(310): Store=8bef462d0842334282195f720d7ff37f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:14,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:14,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8bef462d0842334282195f720d7ff37f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12059691360, jitterRate=0.12314628064632416}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:14,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8bef462d0842334282195f720d7ff37f: 2023-07-13 15:16:14,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f., pid=19, masterSystemTime=1689261374196 2023-07-13 15:16:14,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open 
deploy task for Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,230 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,231 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=8bef462d0842334282195f720d7ff37f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:14,231 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261374231"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261374231"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261374231"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261374231"}]},"ts":"1689261374231"} 2023-07-13 15:16:14,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-13 15:16:14,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; OpenRegionProcedure 8bef462d0842334282195f720d7ff37f, server=jenkins-hbase4.apache.org,36737,1689261368119 in 191 msec 2023-07-13 15:16:14,239 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 15:16:14,239 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, ASSIGN in 354 msec 2023-07-13 15:16:14,240 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:14,240 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374240"}]},"ts":"1689261374240"} 2023-07-13 15:16:14,242 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-13 15:16:14,245 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=17, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:14,247 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign in 434 msec 2023-07-13 15:16:14,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=17 2023-07-13 15:16:14,429 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndAssign, procId: 17 completed 2023-07-13 15:16:14,429 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:14,433 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:14,435 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): 
Connection from 172.31.14.131:52242, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:14,438 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:14,440 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37126, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:14,441 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:14,442 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:14,443 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:14,444 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53414, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:14,448 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndAssign 2023-07-13 15:16:14,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndAssign 2023-07-13 15:16:14,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,464 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374464"}]},"ts":"1689261374464"} 2023-07-13 15:16:14,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:14,466 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-13 15:16:14,469 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndAssign to state=DISABLING 2023-07-13 15:16:14,471 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, UNASSIGN}] 2023-07-13 15:16:14,473 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, UNASSIGN 2023-07-13 15:16:14,474 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=8bef462d0842334282195f720d7ff37f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:14,474 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261374474"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261374474"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261374474"}]},"ts":"1689261374474"} 2023-07-13 15:16:14,476 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 8bef462d0842334282195f720d7ff37f, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:14,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:14,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8bef462d0842334282195f720d7ff37f, disabling compactions & flushes 2023-07-13 15:16:14,631 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. after waiting 0 ms 2023-07-13 15:16:14,632 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 2023-07-13 15:16:14,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:14,646 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f. 
2023-07-13 15:16:14,646 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8bef462d0842334282195f720d7ff37f: 2023-07-13 15:16:14,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,650 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=8bef462d0842334282195f720d7ff37f, regionState=CLOSED 2023-07-13 15:16:14,650 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689261374650"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261374650"}]},"ts":"1689261374650"} 2023-07-13 15:16:14,654 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-13 15:16:14,655 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 8bef462d0842334282195f720d7ff37f, server=jenkins-hbase4.apache.org,36737,1689261368119 in 176 msec 2023-07-13 15:16:14,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-13 15:16:14,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndAssign, region=8bef462d0842334282195f720d7ff37f, UNASSIGN in 184 msec 2023-07-13 15:16:14,658 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261374658"}]},"ts":"1689261374658"} 2023-07-13 15:16:14,660 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-13 15:16:14,662 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndAssign to state=DISABLED 2023-07-13 15:16:14,668 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign in 209 msec 2023-07-13 15:16:14,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 15:16:14,770 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndAssign, procId: 20 completed 2023-07-13 15:16:14,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndAssign 2023-07-13 15:16:14,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,788 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndAssign' from rsgroup 'default' 2023-07-13 15:16:14,790 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:14,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:14,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:14,797 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 15:16:14,803 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/recovered.edits] 2023-07-13 15:16:14,812 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f/recovered.edits/4.seqid 2023-07-13 15:16:14,813 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndAssign/8bef462d0842334282195f720d7ff37f 2023-07-13 15:16:14,813 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndAssign regions 2023-07-13 15:16:14,818 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,845 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndAssign from hbase:meta 2023-07-13 15:16:14,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 15:16:14,905 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndAssign' descriptor. 2023-07-13 15:16:14,907 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,907 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndAssign' from region states. 
2023-07-13 15:16:14,908 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261374908"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:14,914 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:14,915 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8bef462d0842334282195f720d7ff37f, NAME => 'Group_testCreateAndAssign,,1689261373809.8bef462d0842334282195f720d7ff37f.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:14,915 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndAssign' as deleted. 2023-07-13 15:16:14,915 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261374915"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:14,918 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndAssign state from META 2023-07-13 15:16:14,921 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:14,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign in 143 msec 2023-07-13 15:16:15,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 15:16:15,106 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndAssign, procId: 23 completed 2023-07-13 15:16:15,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:15,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:15,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:15,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:15,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:15,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:15,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:15,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:15,125 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:15,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:15,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:15,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:15,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:15,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:15,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:15,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 163 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262575141, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:15,142 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:15,143 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:15,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,145 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:15,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:15,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:15,166 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testDefaultNamespaceCreateAndAssign Thread=497 (was 484) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1366339281_17 at /127.0.0.1:49022 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1366339281_17 at /127.0.0.1:42834 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1366339281_17 at /127.0.0.1:59118 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,43693,1689261373307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:59092 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741842_1018, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=759 (was 734) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4508 (was 4548) 2023-07-13 15:16:15,183 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=497, OpenFileDescriptor=759, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4506 2023-07-13 15:16:15,183 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testCreateMultiRegion 2023-07-13 15:16:15,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:15,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
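The ConstraintException traces in this section all come from the TestRSGroupsBase setup/teardown helper: it moves empty table and server sets back to "default", removes and re-adds a group named "master", and then tries to move the master's RPC address (jenkins-hbase4.apache.org:38141) into that group, which RSGroupAdminServer rejects because that address is not a live region server. A rough sketch of that call follows; the RSGroupAdminClient constructor and method signatures are assumptions inferred from the class and method names in the stack trace, not verified against the test source:

import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterIntoGroup {

  // Mirrors the teardown/setup steps logged above: re-create the "master" group,
  // then attempt to move the master's address into it. Signatures are assumed.
  static void tryMoveMaster(Connection conn, String masterHost, int masterRpcPort) throws Exception {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    groups.removeRSGroup("master"); // logged as RSGroupAdminService.RemoveRSGroup
    groups.addRSGroup("master");    // logged as RSGroupAdminService.AddRSGroup
    try {
      groups.moveServers(
          Collections.singleton(Address.fromParts(masterHost, masterRpcPort)), "master");
    } catch (ConstraintException expected) {
      // The master is not a live region server, so the server side rejects the move;
      // TestRSGroupsBase logs this as "Got this on setup, FYI" and continues.
    }
  }
}

The test deliberately tolerates this failure, so the same warning and trace repeat before and after each test method.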
2023-07-13 15:16:15,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:15,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:15,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:15,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:15,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:15,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:15,204 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:15,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:15,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:15,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:15,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:15,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:15,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:15,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 191 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262575220, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:15,221 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:15,222 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:15,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:15,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:15,223 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:15,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:15,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:15,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:15,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:15,231 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:15,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request 
for creating table: namespace: "default" qualifier: "Group_testCreateMultiRegion" procId is: 24 2023-07-13 15:16:15,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:15,234 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:15,235 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:15,235 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:15,244 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:15,255 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:15,256 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 empty. 2023-07-13 15:16:15,256 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 empty. 
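pid=24 is a CreateTableProcedure for a pre-split table: CREATE_TABLE_WRITE_FS_LAYOUT lays down one region directory per split boundary, and the HFileArchiver lines show leftover .tmp region directories from an earlier run being archived first. A minimal sketch of creating such a table through Admin, assuming an open Admin handle; the split keys shown are the first three binary boundaries visible in the region names logged below (\x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH), the rest are omitted:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionTable {

  static void createPreSplit(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testCreateMultiRegion"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // family 'f' as in the log
        .build();
    // Explicit binary split keys; each one becomes a region boundary, so the
    // procedure writes one region directory per entry plus the first region.
    byte[][] splitKeys = new byte[][] {
        new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
        new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},
        new byte[] {0x00, 0x42, 0x44, 0x46, 0x48}
    };
    admin.createTable(desc, splitKeys); // returns once the CreateTableProcedure completes
  }
}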
2023-07-13 15:16:15,257 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d empty. 2023-07-13 15:16:15,257 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e empty. 2023-07-13 15:16:15,257 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 empty. 2023-07-13 15:16:15,257 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f empty. 2023-07-13 15:16:15,257 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea empty. 2023-07-13 15:16:15,258 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 empty. 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:15,273 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:15,274 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:15,275 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc empty. 2023-07-13 15:16:15,275 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b empty. 2023-07-13 15:16:15,276 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:15,276 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:15,279 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:15,280 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:15,280 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-13 15:16:15,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:15,337 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:15,339 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5d388672e5b8071d72e554eab3e1e298, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,339 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 782714c9895592e9a14e4144491fc645, NAME => 'Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS 
=> '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,351 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8ac5168029360d92f05ba4adf26d125e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,472 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,474 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 782714c9895592e9a14e4144491fc645, disabling compactions & flushes 2023-07-13 15:16:15,474 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:15,474 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:15,474 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. after waiting 0 ms 2023-07-13 15:16:15,474 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:15,474 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 
2023-07-13 15:16:15,474 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 782714c9895592e9a14e4144491fc645: 2023-07-13 15:16:15,475 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => cce925c7292c9863994eb4ffb8b4bdd5, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,478 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,479 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 5d388672e5b8071d72e554eab3e1e298, disabling compactions & flushes 2023-07-13 15:16:15,480 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:15,480 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:15,480 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. after waiting 0 ms 2023-07-13 15:16:15,480 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:15,480 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 
2023-07-13 15:16:15,480 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 5d388672e5b8071d72e554eab3e1e298: 2023-07-13 15:16:15,481 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(7675): creating {ENCODED => 3faf77970ff176e6c1dec41c397f3124, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,503 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 8ac5168029360d92f05ba4adf26d125e, disabling compactions & flushes 2023-07-13 15:16:15,504 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:15,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:15,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. after waiting 0 ms 2023-07-13 15:16:15,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:15,504 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 
2023-07-13 15:16:15,504 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 8ac5168029360d92f05ba4adf26d125e: 2023-07-13 15:16:15,505 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => a98098827adf98b2694320eec92db17f, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:15,569 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,572 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing cce925c7292c9863994eb4ffb8b4bdd5, disabling compactions & flushes 2023-07-13 15:16:15,572 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:15,572 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:15,573 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. after waiting 0 ms 2023-07-13 15:16:15,573 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:15,573 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 
2023-07-13 15:16:15,573 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for cce925c7292c9863994eb4ffb8b4bdd5: 2023-07-13 15:16:15,574 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => fd48e3c2a0303f6a03e03951e4f75f1d, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,581 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,581 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing a98098827adf98b2694320eec92db17f, disabling compactions & flushes 2023-07-13 15:16:15,581 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:15,581 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:15,581 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. after waiting 0 ms 2023-07-13 15:16:15,581 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:15,582 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 
2023-07-13 15:16:15,582 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for a98098827adf98b2694320eec92db17f: 2023-07-13 15:16:15,582 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3252e192329bbf43185c413b7aaaccea, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,616 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,617 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing fd48e3c2a0303f6a03e03951e4f75f1d, disabling compactions & flushes 2023-07-13 15:16:15,617 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:15,617 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:15,617 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. after waiting 0 ms 2023-07-13 15:16:15,617 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:15,617 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 
2023-07-13 15:16:15,617 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for fd48e3c2a0303f6a03e03951e4f75f1d: 2023-07-13 15:16:15,618 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4b10da3444497190f82aebe11c15260b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,619 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,620 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 3252e192329bbf43185c413b7aaaccea, disabling compactions & flushes 2023-07-13 15:16:15,620 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:15,620 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:15,620 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. after waiting 0 ms 2023-07-13 15:16:15,620 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:15,620 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 
2023-07-13 15:16:15,620 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 3252e192329bbf43185c413b7aaaccea: 2023-07-13 15:16:15,621 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9651487d9ba599547c9fb995a3d301dc, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, tableDescriptor='Group_testCreateMultiRegion', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:15,639 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,642 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1604): Closing 4b10da3444497190f82aebe11c15260b, disabling compactions & flushes 2023-07-13 15:16:15,642 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:15,642 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:15,642 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. after waiting 0 ms 2023-07-13 15:16:15,642 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:15,643 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 
2023-07-13 15:16:15,643 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-0] regionserver.HRegion(1558): Region close journal for 4b10da3444497190f82aebe11c15260b: 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1604): Closing 9651487d9ba599547c9fb995a3d301dc, disabling compactions & flushes 2023-07-13 15:16:15,656 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. after waiting 0 ms 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:15,656 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:15,656 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-2] regionserver.HRegion(1558): Region close journal for 9651487d9ba599547c9fb995a3d301dc: 2023-07-13 15:16:15,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1604): Closing 3faf77970ff176e6c1dec41c397f3124, disabling compactions & flushes 2023-07-13 15:16:15,951 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 
after waiting 0 ms 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:15,951 INFO [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:15,951 DEBUG [RegionOpenAndInit-Group_testCreateMultiRegion-pool-1] regionserver.HRegion(1558): Region close journal for 3faf77970ff176e6c1dec41c397f3124: 2023-07-13 15:16:15,959 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:15,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 
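[The CreateTableProcedure traced above as pid=24 is the master-side half of a synchronous client createTable call with explicit split keys: the ten regions being initialized and written to hbase:meta correspond to the nine boundaries (\x00\x02\x04\x06\x08 through \x01\x03\x05\x07\x09) visible in the region names, and the periodic "Checking to see if procedure is done pid=24" entries are the master answering the client's completion polling. The following is only an illustrative sketch of the kind of client call that would produce this procedure; the Admin handle and surrounding scaffolding are assumptions, not the actual TestRSGroupsBasics test code.

// Sketch only: create 'Group_testCreateMultiRegion' pre-split into 10 regions,
// matching the split boundaries seen in the log. Not the literal test code.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionSketch {
  static void createMultiRegionTable(Admin admin) throws Exception {
    TableName name = TableName.valueOf("Group_testCreateMultiRegion");
    TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))   // family 'f' as in the log
        .build();
    // Nine split points -> ten regions (the ASSIGN procedures pid=25..34 above).
    byte[][] splitKeys = new byte[][] {
        new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
        new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},                    // \x00"$&(
        new byte[] {0x00, 0x42, 0x44, 0x46, 0x48},                    // \x00BDFH
        new byte[] {0x00, 0x62, 0x64, 0x66, 0x68},                    // \x00bdfh
        new byte[] {0x00, (byte) 0x82, (byte) 0x84, (byte) 0x86, (byte) 0x88},
        new byte[] {0x00, (byte) 0xA2, (byte) 0xA4, (byte) 0xA6, (byte) 0xA8},
        new byte[] {0x00, (byte) 0xC2, (byte) 0xC4, (byte) 0xC6, (byte) 0xC8},
        new byte[] {0x00, (byte) 0xE2, (byte) 0xE4, (byte) 0xE6, (byte) 0xE8},
        new byte[] {0x01, 0x03, 0x05, 0x07, 0x09},
    };
    // Synchronous: returns once the master's CreateTableProcedure has finished,
    // which is why the client keeps polling "is procedure done" in the log.
    admin.createTable(desc, splitKeys);
  }
}
]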
2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,961 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261375960"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261375960"}]},"ts":"1689261375960"} 2023-07-13 15:16:15,967 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 10 regions to meta. 2023-07-13 15:16:15,975 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:15,975 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261375975"}]},"ts":"1689261375975"} 2023-07-13 15:16:15,979 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLING in hbase:meta 2023-07-13 15:16:15,982 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:15,983 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:15,983 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:15,983 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:15,983 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:15,983 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:15,984 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, ASSIGN}, {pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, ASSIGN}, {pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, ASSIGN}, {pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, ASSIGN}, {pid=29, ppid=24, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, ASSIGN}, {pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, ASSIGN}, {pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, ASSIGN}, {pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, ASSIGN}, {pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, ASSIGN}, {pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, ASSIGN}] 2023-07-13 15:16:15,987 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, ASSIGN 2023-07-13 15:16:15,987 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, ASSIGN 2023-07-13 15:16:15,988 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, ASSIGN 2023-07-13 15:16:15,988 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, ASSIGN 2023-07-13 15:16:15,990 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=34, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43693,1689261373307; forceNewPlan=false, retain=false 2023-07-13 15:16:15,990 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:15,990 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, ASSIGN 2023-07-13 15:16:15,991 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=31, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:15,991 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=33, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41955,1689261371593; forceNewPlan=false, retain=false 2023-07-13 15:16:15,992 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, ASSIGN 2023-07-13 15:16:15,992 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, ASSIGN 2023-07-13 15:16:15,996 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, ASSIGN 2023-07-13 15:16:15,996 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=28, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41955,1689261371593; forceNewPlan=false, retain=false 2023-07-13 15:16:15,996 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, ASSIGN 2023-07-13 15:16:15,996 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=29, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:15,999 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=30, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43693,1689261373307; forceNewPlan=false, retain=false 2023-07-13 15:16:15,999 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:15,999 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=27, ppid=24, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:15,999 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, ASSIGN 2023-07-13 15:16:16,001 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43693,1689261373307; forceNewPlan=false, retain=false 2023-07-13 15:16:16,107 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:16,140 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 10 regions. 10 retained the pre-restart assignment. 2023-07-13 15:16:16,148 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=9651487d9ba599547c9fb995a3d301dc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,148 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=cce925c7292c9863994eb4ffb8b4bdd5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:16,148 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4b10da3444497190f82aebe11c15260b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:16,148 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261376148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376148"}]},"ts":"1689261376148"} 2023-07-13 15:16:16,148 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=a98098827adf98b2694320eec92db17f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,148 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=782714c9895592e9a14e4144491fc645, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,148 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376148"}]},"ts":"1689261376148"} 2023-07-13 15:16:16,148 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376148"}]},"ts":"1689261376148"} 2023-07-13 15:16:16,148 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376148"}]},"ts":"1689261376148"} 2023-07-13 15:16:16,148 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261376148"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376148"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376148"}]},"ts":"1689261376148"} 2023-07-13 15:16:16,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=34, state=RUNNABLE; OpenRegionProcedure 9651487d9ba599547c9fb995a3d301dc, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:16,154 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=30, state=RUNNABLE; OpenRegionProcedure a98098827adf98b2694320eec92db17f, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:16,158 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; OpenRegionProcedure 4b10da3444497190f82aebe11c15260b, server=jenkins-hbase4.apache.org,41955,1689261371593}] 2023-07-13 15:16:16,160 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=28, state=RUNNABLE; OpenRegionProcedure cce925c7292c9863994eb4ffb8b4bdd5, server=jenkins-hbase4.apache.org,41955,1689261371593}] 2023-07-13 15:16:16,161 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=25, state=RUNNABLE; OpenRegionProcedure 782714c9895592e9a14e4144491fc645, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:16,162 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3252e192329bbf43185c413b7aaaccea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:16,162 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376162"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376162"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376162"}]},"ts":"1689261376162"} 2023-07-13 15:16:16,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=32, state=RUNNABLE; OpenRegionProcedure 3252e192329bbf43185c413b7aaaccea, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:16,167 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta 
row=8ac5168029360d92f05ba4adf26d125e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:16,168 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376167"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376167"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376167"}]},"ts":"1689261376167"} 2023-07-13 15:16:16,170 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=fd48e3c2a0303f6a03e03951e4f75f1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,170 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376170"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376170"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376170"}]},"ts":"1689261376170"} 2023-07-13 15:16:16,172 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=27, state=RUNNABLE; OpenRegionProcedure 8ac5168029360d92f05ba4adf26d125e, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:16,173 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=3faf77970ff176e6c1dec41c397f3124, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,173 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5d388672e5b8071d72e554eab3e1e298, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,173 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376173"}]},"ts":"1689261376173"} 2023-07-13 15:16:16,173 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261376173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261376173"}]},"ts":"1689261376173"} 2023-07-13 15:16:16,175 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure fd48e3c2a0303f6a03e03951e4f75f1d, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:16,179 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, state=RUNNABLE; OpenRegionProcedure 3faf77970ff176e6c1dec41c397f3124, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:16,185 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=26, state=RUNNABLE; OpenRegionProcedure 5d388672e5b8071d72e554eab3e1e298, 
server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:16,200 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:16,201 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:16,203 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:16,203 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 15:16:16,203 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:16,203 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 15:16:16,204 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:16,204 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 15:16:16,307 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,307 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:16,309 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53420, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:16,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:16,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 
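[At this point the master has dispatched the OpenRegionProcedures (pid=35..44) to the four region servers and the RS_OPEN_REGION handlers below begin instantiating the stores. A test driven by HBaseTestingUtility would typically block until every region of the new table is assigned and serving before making assertions; the sketch below shows that wait under the assumption that a TEST_UTIL-style handle like the one started at the top of this log is available, and it is not the literal TestRSGroupsBasics code.

// Sketch only: wait for all regions of the freshly created table to come online.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForTableOnlineSketch {
  static void waitForTableOnline(HBaseTestingUtility util) throws Exception {
    TableName name = TableName.valueOf("Group_testCreateMultiRegion");
    // Blocks until every region has been assigned and opened, i.e. until the
    // OpenRegionProcedures dispatched above have all completed.
    util.waitUntilAllRegionsAssigned(name);
    // Then waits until the table is actually available to clients.
    util.waitTableAvailable(name);
  }
}
]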
2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a98098827adf98b2694320eec92db17f, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'} 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4b10da3444497190f82aebe11c15260b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'} 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,322 INFO [StoreOpener-4b10da3444497190f82aebe11c15260b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,324 DEBUG [StoreOpener-4b10da3444497190f82aebe11c15260b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/f 2023-07-13 15:16:16,324 DEBUG [StoreOpener-4b10da3444497190f82aebe11c15260b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/f 2023-07-13 15:16:16,324 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:16,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3252e192329bbf43185c413b7aaaccea, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'} 2023-07-13 15:16:16,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,324 INFO [StoreOpener-4b10da3444497190f82aebe11c15260b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4b10da3444497190f82aebe11c15260b columnFamilyName f 2023-07-13 15:16:16,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,324 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,325 INFO [StoreOpener-4b10da3444497190f82aebe11c15260b-1] regionserver.HStore(310): Store=4b10da3444497190f82aebe11c15260b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,326 INFO [StoreOpener-3252e192329bbf43185c413b7aaaccea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,328 DEBUG [StoreOpener-3252e192329bbf43185c413b7aaaccea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/f 2023-07-13 15:16:16,328 DEBUG [StoreOpener-3252e192329bbf43185c413b7aaaccea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/f 2023-07-13 15:16:16,328 INFO [StoreOpener-3252e192329bbf43185c413b7aaaccea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3252e192329bbf43185c413b7aaaccea columnFamilyName f 2023-07-13 15:16:16,329 INFO [StoreOpener-3252e192329bbf43185c413b7aaaccea-1] regionserver.HStore(310): Store=3252e192329bbf43185c413b7aaaccea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:16,330 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,331 INFO [StoreOpener-a98098827adf98b2694320eec92db17f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,333 DEBUG [StoreOpener-a98098827adf98b2694320eec92db17f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/f 2023-07-13 15:16:16,333 DEBUG [StoreOpener-a98098827adf98b2694320eec92db17f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/f 2023-07-13 15:16:16,334 INFO [StoreOpener-a98098827adf98b2694320eec92db17f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a98098827adf98b2694320eec92db17f columnFamilyName f 2023-07-13 15:16:16,335 INFO [StoreOpener-a98098827adf98b2694320eec92db17f-1] regionserver.HStore(310): Store=a98098827adf98b2694320eec92db17f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d388672e5b8071d72e554eab3e1e298, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => '\x00"$&('} 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4b10da3444497190f82aebe11c15260b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10615638720, jitterRate=-0.011341601610183716}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
4b10da3444497190f82aebe11c15260b: 2023-07-13 15:16:16,340 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,340 INFO [StoreOpener-5d388672e5b8071d72e554eab3e1e298-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,343 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b., pid=37, masterSystemTime=1689261376314 2023-07-13 15:16:16,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,345 DEBUG [StoreOpener-5d388672e5b8071d72e554eab3e1e298-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/f 2023-07-13 15:16:16,345 DEBUG [StoreOpener-5d388672e5b8071d72e554eab3e1e298-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/f 2023-07-13 15:16:16,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3252e192329bbf43185c413b7aaaccea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10283988480, jitterRate=-0.04222893714904785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3252e192329bbf43185c413b7aaaccea: 2023-07-13 15:16:16,346 INFO [StoreOpener-5d388672e5b8071d72e554eab3e1e298-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d388672e5b8071d72e554eab3e1e298 columnFamilyName f 2023-07-13 15:16:16,347 INFO [StoreOpener-5d388672e5b8071d72e554eab3e1e298-1] regionserver.HStore(310): Store=5d388672e5b8071d72e554eab3e1e298/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,347 INFO 
[PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=4b10da3444497190f82aebe11c15260b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:16,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a98098827adf98b2694320eec92db17f 2023-07-13 15:16:16,347 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376347"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376347"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376347"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376347"}]},"ts":"1689261376347"} 2023-07-13 15:16:16,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,348 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,350 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:16,351 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a98098827adf98b2694320eec92db17f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10734588800, jitterRate=-2.6351213455200195E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,351 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a98098827adf98b2694320eec92db17f: 2023-07-13 15:16:16,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea., pid=40, masterSystemTime=1689261376320 2023-07-13 15:16:16,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:16,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:16,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 
2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cce925c7292c9863994eb4ffb8b4bdd5, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'} 2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,354 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-13 15:16:16,354 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; OpenRegionProcedure 4b10da3444497190f82aebe11c15260b, server=jenkins-hbase4.apache.org,41955,1689261371593 in 191 msec 2023-07-13 15:16:16,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:16,354 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:16,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 
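[editor's note] The entries above show each region of Group_testCreateMultiRegion being opened with explicit STARTKEY/ENDKEY boundaries (\x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH, ...), i.e. a table pre-split at creation time. The following is a minimal, hypothetical Java sketch of how such a multi-region table could be created through the public HBase 2.x client Admin API; it is not the test's actual code, and the split keys shown are only the first few boundaries visible in the log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testCreateMultiRegion");
      // Single column family "f", matching the store name seen in the log.
      TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Illustrative split keys only; they correspond to the first region
      // boundaries logged above (\x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH).
      byte[][] splitKeys = new byte[][] {
          new byte[] {0x00, 0x02, 0x04, 0x06, 0x08},
          new byte[] {0x00, 0x22, 0x24, 0x26, 0x28},
          new byte[] {0x00, 0x42, 0x44, 0x46, 0x48}
      };
      // createTable with split keys pre-creates one region per key range;
      // the master then assigns each region, producing the OpenRegionProcedure
      // and TransitRegionStateProcedure entries seen in this log.
      admin.createTable(desc, splitKeys);
    }
  }
}
```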
2023-07-13 15:16:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8ac5168029360d92f05ba4adf26d125e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'} 2023-07-13 15:16:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,356 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=3252e192329bbf43185c413b7aaaccea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:16,357 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376356"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376356"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376356"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376356"}]},"ts":"1689261376356"} 2023-07-13 15:16:16,358 INFO [StoreOpener-8ac5168029360d92f05ba4adf26d125e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, ASSIGN in 370 msec 2023-07-13 15:16:16,360 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,360 INFO [StoreOpener-cce925c7292c9863994eb4ffb8b4bdd5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,361 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5d388672e5b8071d72e554eab3e1e298; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10188998560, jitterRate=-0.051075562834739685}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5d388672e5b8071d72e554eab3e1e298: 2023-07-13 15:16:16,362 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f., pid=36, masterSystemTime=1689261376307 2023-07-13 15:16:16,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298., pid=44, masterSystemTime=1689261376328 2023-07-13 15:16:16,363 DEBUG [StoreOpener-8ac5168029360d92f05ba4adf26d125e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/f 2023-07-13 15:16:16,363 DEBUG [StoreOpener-8ac5168029360d92f05ba4adf26d125e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/f 2023-07-13 15:16:16,368 DEBUG [StoreOpener-cce925c7292c9863994eb4ffb8b4bdd5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/f 2023-07-13 15:16:16,369 DEBUG [StoreOpener-cce925c7292c9863994eb4ffb8b4bdd5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/f 2023-07-13 15:16:16,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=32 2023-07-13 15:16:16,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=32, state=SUCCESS; OpenRegionProcedure 3252e192329bbf43185c413b7aaaccea, server=jenkins-hbase4.apache.org,36737,1689261368119 in 196 msec 2023-07-13 15:16:16,370 INFO [StoreOpener-8ac5168029360d92f05ba4adf26d125e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8ac5168029360d92f05ba4adf26d125e columnFamilyName f 2023-07-13 15:16:16,370 INFO [StoreOpener-cce925c7292c9863994eb4ffb8b4bdd5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cce925c7292c9863994eb4ffb8b4bdd5 columnFamilyName f 2023-07-13 15:16:16,370 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:16,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:16,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:16,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9651487d9ba599547c9fb995a3d301dc, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''} 2023-07-13 15:16:16,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,374 INFO [StoreOpener-cce925c7292c9863994eb4ffb8b4bdd5-1] regionserver.HStore(310): Store=cce925c7292c9863994eb4ffb8b4bdd5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,374 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=a98098827adf98b2694320eec92db17f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,374 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376374"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376374"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376374"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376374"}]},"ts":"1689261376374"} 2023-07-13 15:16:16,374 INFO [StoreOpener-8ac5168029360d92f05ba4adf26d125e-1] regionserver.HStore(310): 
Store=8ac5168029360d92f05ba4adf26d125e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:16,375 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:16,375 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fd48e3c2a0303f6a03e03951e4f75f1d, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'} 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,375 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, ASSIGN in 385 msec 2023-07-13 15:16:16,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,376 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,377 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5d388672e5b8071d72e554eab3e1e298, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,377 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376377"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376377"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376377"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376377"}]},"ts":"1689261376377"} 2023-07-13 15:16:16,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,377 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,383 INFO [StoreOpener-9651487d9ba599547c9fb995a3d301dc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,383 INFO [StoreOpener-fd48e3c2a0303f6a03e03951e4f75f1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:16,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:16,387 DEBUG [StoreOpener-fd48e3c2a0303f6a03e03951e4f75f1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/f 2023-07-13 15:16:16,387 DEBUG [StoreOpener-fd48e3c2a0303f6a03e03951e4f75f1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/f 2023-07-13 15:16:16,387 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=30 2023-07-13 15:16:16,387 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=30, state=SUCCESS; OpenRegionProcedure a98098827adf98b2694320eec92db17f, server=jenkins-hbase4.apache.org,43693,1689261373307 in 225 msec 2023-07-13 15:16:16,388 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,388 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=26 2023-07-13 
15:16:16,389 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=26, state=SUCCESS; OpenRegionProcedure 5d388672e5b8071d72e554eab3e1e298, server=jenkins-hbase4.apache.org,34275,1689261367926 in 198 msec 2023-07-13 15:16:16,389 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8ac5168029360d92f05ba4adf26d125e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11569611680, jitterRate=0.07750405371189117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8ac5168029360d92f05ba4adf26d125e: 2023-07-13 15:16:16,390 DEBUG [StoreOpener-9651487d9ba599547c9fb995a3d301dc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/f 2023-07-13 15:16:16,390 DEBUG [StoreOpener-9651487d9ba599547c9fb995a3d301dc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/f 2023-07-13 15:16:16,390 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e., pid=41, masterSystemTime=1689261376320 2023-07-13 15:16:16,390 INFO [StoreOpener-9651487d9ba599547c9fb995a3d301dc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9651487d9ba599547c9fb995a3d301dc columnFamilyName f 2023-07-13 15:16:16,391 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, ASSIGN in 404 msec 2023-07-13 15:16:16,391 INFO [StoreOpener-fd48e3c2a0303f6a03e03951e4f75f1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fd48e3c2a0303f6a03e03951e4f75f1d columnFamilyName f 2023-07-13 15:16:16,391 INFO [StoreOpener-9651487d9ba599547c9fb995a3d301dc-1] 
regionserver.HStore(310): Store=9651487d9ba599547c9fb995a3d301dc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, ASSIGN in 405 msec 2023-07-13 15:16:16,392 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:16,392 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:16,392 INFO [StoreOpener-fd48e3c2a0303f6a03e03951e4f75f1d-1] regionserver.HStore(310): Store=fd48e3c2a0303f6a03e03951e4f75f1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,393 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=8ac5168029360d92f05ba4adf26d125e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:16,393 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376392"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376392"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376392"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376392"}]},"ts":"1689261376392"} 2023-07-13 15:16:16,397 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=27 2023-07-13 15:16:16,397 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=27, state=SUCCESS; OpenRegionProcedure 8ac5168029360d92f05ba4adf26d125e, server=jenkins-hbase4.apache.org,36737,1689261368119 in 223 msec 2023-07-13 15:16:16,399 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, ASSIGN in 413 msec 2023-07-13 15:16:16,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,409 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,410 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cce925c7292c9863994eb4ffb8b4bdd5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10004456800, jitterRate=-0.06826235353946686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cce925c7292c9863994eb4ffb8b4bdd5: 2023-07-13 15:16:16,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5., pid=38, masterSystemTime=1689261376314 2023-07-13 15:16:16,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:16,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 
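[editor's note] The RegionStateStore "Put {...}" entries above record each region's OPEN state, server location and openSeqNum being written to hbase:meta. A minimal, hypothetical sketch of how a client could read those assignments back through the public RegionLocator API (rather than scanning hbase:meta directly) is shown below; it is illustrative and not part of the test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListRegionLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testCreateMultiRegion"))) {
      // Each location reflects the regionLocation the master wrote to
      // hbase:meta when the corresponding region finished opening.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " -> " + loc.getServerName());
      }
    }
  }
}
```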
2023-07-13 15:16:16,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:16,431 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=cce925c7292c9863994eb4ffb8b4bdd5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:16,432 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376431"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376431"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376431"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376431"}]},"ts":"1689261376431"} 2023-07-13 15:16:16,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:16,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=28 2023-07-13 15:16:16,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=28, state=SUCCESS; OpenRegionProcedure cce925c7292c9863994eb4ffb8b4bdd5, server=jenkins-hbase4.apache.org,41955,1689261371593 in 275 msec 2023-07-13 15:16:16,441 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, ASSIGN in 455 msec 2023-07-13 15:16:16,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,448 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fd48e3c2a0303f6a03e03951e4f75f1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11533005600, jitterRate=0.07409484684467316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fd48e3c2a0303f6a03e03951e4f75f1d: 2023-07-13 15:16:16,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d., pid=42, masterSystemTime=1689261376328 2023-07-13 15:16:16,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9651487d9ba599547c9fb995a3d301dc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11512383360, jitterRate=0.07217425107955933}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9651487d9ba599547c9fb995a3d301dc: 2023-07-13 15:16:16,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc., pid=35, masterSystemTime=1689261376307 2023-07-13 15:16:16,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:16,454 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:16,454 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:16,454 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=fd48e3c2a0303f6a03e03951e4f75f1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,455 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376454"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376454"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376454"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376454"}]},"ts":"1689261376454"} 2023-07-13 15:16:16,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3faf77970ff176e6c1dec41c397f3124, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'} 2023-07-13 15:16:16,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 
2023-07-13 15:16:16,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:16,457 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:16,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 782714c9895592e9a14e4144491fc645, NAME => 'Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'} 2023-07-13 15:16:16,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateMultiRegion 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:16,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,458 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,459 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=9651487d9ba599547c9fb995a3d301dc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,459 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261376459"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376459"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376459"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376459"}]},"ts":"1689261376459"} 2023-07-13 15:16:16,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-13 15:16:16,463 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure fd48e3c2a0303f6a03e03951e4f75f1d, server=jenkins-hbase4.apache.org,34275,1689261367926 in 282 msec 2023-07-13 15:16:16,466 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, ASSIGN in 479 msec 2023-07-13 15:16:16,467 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=34 2023-07-13 15:16:16,467 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=34, state=SUCCESS; OpenRegionProcedure 9651487d9ba599547c9fb995a3d301dc, server=jenkins-hbase4.apache.org,43693,1689261373307 in 311 msec 2023-07-13 15:16:16,469 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, 
region=9651487d9ba599547c9fb995a3d301dc, ASSIGN in 483 msec 2023-07-13 15:16:16,471 INFO [StoreOpener-782714c9895592e9a14e4144491fc645-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,473 INFO [StoreOpener-3faf77970ff176e6c1dec41c397f3124-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,476 DEBUG [StoreOpener-782714c9895592e9a14e4144491fc645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/f 2023-07-13 15:16:16,476 DEBUG [StoreOpener-782714c9895592e9a14e4144491fc645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/f 2023-07-13 15:16:16,477 INFO [StoreOpener-782714c9895592e9a14e4144491fc645-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 782714c9895592e9a14e4144491fc645 columnFamilyName f 2023-07-13 15:16:16,477 DEBUG [StoreOpener-3faf77970ff176e6c1dec41c397f3124-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/f 2023-07-13 15:16:16,477 DEBUG [StoreOpener-3faf77970ff176e6c1dec41c397f3124-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/f 2023-07-13 15:16:16,478 INFO [StoreOpener-3faf77970ff176e6c1dec41c397f3124-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3faf77970ff176e6c1dec41c397f3124 columnFamilyName f 2023-07-13 
15:16:16,478 INFO [StoreOpener-3faf77970ff176e6c1dec41c397f3124-1] regionserver.HStore(310): Store=3faf77970ff176e6c1dec41c397f3124/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,479 INFO [StoreOpener-782714c9895592e9a14e4144491fc645-1] regionserver.HStore(310): Store=782714c9895592e9a14e4144491fc645/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:16,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:16,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:16,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 782714c9895592e9a14e4144491fc645; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9480665600, jitterRate=-0.11704421043395996}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 782714c9895592e9a14e4144491fc645: 2023-07-13 15:16:16,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:16,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3faf77970ff176e6c1dec41c397f3124; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10870721920, jitterRate=0.012414872646331787}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:16,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3faf77970ff176e6c1dec41c397f3124: 2023-07-13 15:16:16,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645., pid=39, masterSystemTime=1689261376307 2023-07-13 15:16:16,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124., pid=43, masterSystemTime=1689261376328 2023-07-13 15:16:16,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:16,497 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:16,499 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=782714c9895592e9a14e4144491fc645, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:16,500 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261376499"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376499"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376499"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376499"}]},"ts":"1689261376499"} 2023-07-13 15:16:16,502 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=3faf77970ff176e6c1dec41c397f3124, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:16,503 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261376502"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261376502"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261376502"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261376502"}]},"ts":"1689261376502"} 2023-07-13 15:16:16,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:16,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 
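(Editor's note, not part of the captured log: the desiredMaxFileSize values in the two "Opened ... next sequenceid=2" entries above are consistent with the default hbase.hregion.max.filesize of 10737418240 bytes with the logged per-region jitter applied, assuming the usual ConstantSizeRegionSplitPolicy behaviour: 10737418240 x (1 - 0.11704421) ~ 9480665600 for region 782714c9895592e9a14e4144491fc645, and 10737418240 x (1 + 0.01241487) ~ 10870721920 for region 3faf77970ff176e6c1dec41c397f3124.)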
2023-07-13 15:16:16,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-13 15:16:16,522 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 3faf77970ff176e6c1dec41c397f3124, server=jenkins-hbase4.apache.org,34275,1689261367926 in 330 msec 2023-07-13 15:16:16,523 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=25 2023-07-13 15:16:16,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=25, state=SUCCESS; OpenRegionProcedure 782714c9895592e9a14e4144491fc645, server=jenkins-hbase4.apache.org,43693,1689261373307 in 343 msec 2023-07-13 15:16:16,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, ASSIGN in 538 msec 2023-07-13 15:16:16,529 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=24 2023-07-13 15:16:16,529 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, ASSIGN in 540 msec 2023-07-13 15:16:16,531 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:16,531 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261376531"}]},"ts":"1689261376531"} 2023-07-13 15:16:16,534 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=ENABLED in hbase:meta 2023-07-13 15:16:16,537 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=24, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateMultiRegion execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:16,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion in 1.3090 sec 2023-07-13 15:16:17,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 15:16:17,352 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateMultiRegion, procId: 24 completed 2023-07-13 15:16:17,352 DEBUG [Listener at localhost/35161] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateMultiRegion get assigned. Timeout = 60000ms 2023-07-13 15:16:17,353 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,361 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateMultiRegion assigned to meta. Checking AM states. 2023-07-13 15:16:17,362 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:17,362 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateMultiRegion assigned. 
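Editor's note, not part of the captured log: the CreateTableProcedure pid=24 that finishes here is the server side of an ordinary client-side table creation with nine explicit split points, one per region boundary visible in the region names above (\x00\x02\x04\x06\x08 through \x01\x03\x05\x07\x09). A minimal, hypothetical sketch of that kind of call against the HBase 2.x client API follows; only the table name, the column family 'f' and the split bytes are taken from the log, the configuration/connection handling and class name are assumed.

// Hypothetical reconstruction of the client call behind CreateTableProcedure pid=24.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateMultiRegionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testCreateMultiRegion");
    // Nine split points -> the ten regions whose ASSIGN procedures finish above
    // (empty start key, \x00\x02\x04\x06\x08, \x00"$&(, \x00BDFH, ..., \x01\x03\x05\x07\x09).
    byte[][] splits = {
        { 0x00, 0x02, 0x04, 0x06, 0x08 },
        { 0x00, '"', '$', '&', '(' },
        { 0x00, 'B', 'D', 'F', 'H' },
        { 0x00, 'b', 'd', 'f', 'h' },
        { 0x00, (byte) 0x82, (byte) 0x84, (byte) 0x86, (byte) 0x88 },
        { 0x00, (byte) 0xA2, (byte) 0xA4, (byte) 0xA6, (byte) 0xA8 },
        { 0x00, (byte) 0xC2, (byte) 0xC4, (byte) 0xC6, (byte) 0xC8 },
        { 0x00, (byte) 0xE2, (byte) 0xE4, (byte) 0xE6, (byte) 0xE8 },
        { 0x01, 0x03, 0x05, 0x07, 0x09 },
    };
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Column family 'f' matches the store opener entries in the log above.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }
}

The "Waiting until all regions of table Group_testCreateMultiRegion get assigned" entries just above come from the test utility's waitUntilAllRegionsAssigned check, a separate step the test performs after the create call completes.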
2023-07-13 15:16:17,365 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_testCreateMultiRegion 2023-07-13 15:16:17,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateMultiRegion 2023-07-13 15:16:17,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=45, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:17,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-13 15:16:17,371 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261377371"}]},"ts":"1689261377371"} 2023-07-13 15:16:17,373 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLING in hbase:meta 2023-07-13 15:16:17,376 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCreateMultiRegion to state=DISABLING 2023-07-13 15:16:17,381 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, UNASSIGN}, {pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, UNASSIGN}, {pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, UNASSIGN}, {pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, UNASSIGN}, {pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, UNASSIGN}, {pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, UNASSIGN}, {pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, UNASSIGN}, {pid=53, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, UNASSIGN}, {pid=54, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, UNASSIGN}, {pid=55, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, UNASSIGN}] 2023-07-13 15:16:17,385 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, UNASSIGN 2023-07-13 15:16:17,385 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=45, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, UNASSIGN 2023-07-13 15:16:17,386 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, UNASSIGN 2023-07-13 15:16:17,386 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, UNASSIGN 2023-07-13 15:16:17,386 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, UNASSIGN 2023-07-13 15:16:17,398 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9651487d9ba599547c9fb995a3d301dc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:17,398 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261377398"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377398"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377398"}]},"ts":"1689261377398"} 2023-07-13 15:16:17,399 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4b10da3444497190f82aebe11c15260b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:17,399 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=782714c9895592e9a14e4144491fc645, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:17,399 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377399"}]},"ts":"1689261377399"} 2023-07-13 15:16:17,399 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261377399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377399"}]},"ts":"1689261377399"} 2023-07-13 15:16:17,400 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=3252e192329bbf43185c413b7aaaccea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:17,400 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377399"}]},"ts":"1689261377399"} 2023-07-13 15:16:17,400 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=fd48e3c2a0303f6a03e03951e4f75f1d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:17,400 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377399"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377399"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377399"}]},"ts":"1689261377399"} 2023-07-13 15:16:17,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=54, state=RUNNABLE; CloseRegionProcedure 9651487d9ba599547c9fb995a3d301dc, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:17,404 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=53, state=RUNNABLE; CloseRegionProcedure 4b10da3444497190f82aebe11c15260b, server=jenkins-hbase4.apache.org,41955,1689261371593}] 2023-07-13 15:16:17,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; CloseRegionProcedure 782714c9895592e9a14e4144491fc645, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:17,406 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=52, state=RUNNABLE; CloseRegionProcedure 3252e192329bbf43185c413b7aaaccea, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:17,410 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=51, state=RUNNABLE; CloseRegionProcedure fd48e3c2a0303f6a03e03951e4f75f1d, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:17,410 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, UNASSIGN 2023-07-13 15:16:17,416 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, UNASSIGN 2023-07-13 15:16:17,417 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a98098827adf98b2694320eec92db17f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:17,417 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377417"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377417"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377417"}]},"ts":"1689261377417"} 2023-07-13 
15:16:17,419 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, UNASSIGN 2023-07-13 15:16:17,419 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=3faf77970ff176e6c1dec41c397f3124, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:17,419 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377419"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377419"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377419"}]},"ts":"1689261377419"} 2023-07-13 15:16:17,421 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=50, state=RUNNABLE; CloseRegionProcedure a98098827adf98b2694320eec92db17f, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:17,421 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, UNASSIGN 2023-07-13 15:16:17,421 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=cce925c7292c9863994eb4ffb8b4bdd5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:17,421 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377421"}]},"ts":"1689261377421"} 2023-07-13 15:16:17,422 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=49, state=RUNNABLE; CloseRegionProcedure 3faf77970ff176e6c1dec41c397f3124, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:17,423 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=45, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, UNASSIGN 2023-07-13 15:16:17,424 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=8ac5168029360d92f05ba4adf26d125e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:17,425 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377424"}]},"ts":"1689261377424"} 2023-07-13 15:16:17,425 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5d388672e5b8071d72e554eab3e1e298, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 
15:16:17,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=48, state=RUNNABLE; CloseRegionProcedure cce925c7292c9863994eb4ffb8b4bdd5, server=jenkins-hbase4.apache.org,41955,1689261371593}] 2023-07-13 15:16:17,425 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377425"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261377425"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261377425"}]},"ts":"1689261377425"} 2023-07-13 15:16:17,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=47, state=RUNNABLE; CloseRegionProcedure 8ac5168029360d92f05ba4adf26d125e, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:17,431 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=46, state=RUNNABLE; CloseRegionProcedure 5d388672e5b8071d72e554eab3e1e298, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:17,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-13 15:16:17,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:17,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9651487d9ba599547c9fb995a3d301dc, disabling compactions & flushes 2023-07-13 15:16:17,559 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:17,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:17,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. after waiting 0 ms 2023-07-13 15:16:17,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:17,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:17,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4b10da3444497190f82aebe11c15260b, disabling compactions & flushes 2023-07-13 15:16:17,561 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:17,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 
2023-07-13 15:16:17,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. after waiting 0 ms 2023-07-13 15:16:17,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:17,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:17,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8ac5168029360d92f05ba4adf26d125e, disabling compactions & flushes 2023-07-13 15:16:17,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:17,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:17,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. after waiting 0 ms 2023-07-13 15:16:17,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:17,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:17,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5d388672e5b8071d72e554eab3e1e298, disabling compactions & flushes 2023-07-13 15:16:17,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:17,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:17,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. after waiting 0 ms 2023-07-13 15:16:17,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 
2023-07-13 15:16:17,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b. 2023-07-13 15:16:17,591 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4b10da3444497190f82aebe11c15260b: 2023-07-13 15:16:17,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc. 2023-07-13 15:16:17,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9651487d9ba599547c9fb995a3d301dc: 2023-07-13 15:16:17,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:17,595 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:17,596 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=4b10da3444497190f82aebe11c15260b, regionState=CLOSED 2023-07-13 15:16:17,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377596"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377596"}]},"ts":"1689261377596"} 2023-07-13 15:16:17,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:17,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a98098827adf98b2694320eec92db17f 2023-07-13 15:16:17,598 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=9651487d9ba599547c9fb995a3d301dc, regionState=CLOSED 2023-07-13 15:16:17,599 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261377598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377598"}]},"ts":"1689261377598"} 2023-07-13 15:16:17,606 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=54 2023-07-13 15:16:17,606 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=54, state=SUCCESS; CloseRegionProcedure 9651487d9ba599547c9fb995a3d301dc, server=jenkins-hbase4.apache.org,43693,1689261373307 in 200 msec 2023-07-13 15:16:17,606 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=53 2023-07-13 15:16:17,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=53, state=SUCCESS; CloseRegionProcedure 4b10da3444497190f82aebe11c15260b, server=jenkins-hbase4.apache.org,41955,1689261371593 in 195 msec 2023-07-13 15:16:17,608 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=4b10da3444497190f82aebe11c15260b, UNASSIGN in 225 msec 2023-07-13 15:16:17,608 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=9651487d9ba599547c9fb995a3d301dc, UNASSIGN in 225 msec 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cce925c7292c9863994eb4ffb8b4bdd5, disabling compactions & flushes 2023-07-13 15:16:17,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. after waiting 0 ms 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a98098827adf98b2694320eec92db17f, disabling compactions & flushes 2023-07-13 15:16:17,612 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. after waiting 0 ms 2023-07-13 15:16:17,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 
2023-07-13 15:16:17,615 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e. 2023-07-13 15:16:17,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8ac5168029360d92f05ba4adf26d125e: 2023-07-13 15:16:17,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298. 2023-07-13 15:16:17,619 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5d388672e5b8071d72e554eab3e1e298: 2023-07-13 15:16:17,619 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:17,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:17,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3252e192329bbf43185c413b7aaaccea, disabling compactions & flushes 2023-07-13 15:16:17,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:17,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:17,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. after waiting 0 ms 2023-07-13 15:16:17,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 
2023-07-13 15:16:17,636 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=8ac5168029360d92f05ba4adf26d125e, regionState=CLOSED 2023-07-13 15:16:17,636 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377635"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377635"}]},"ts":"1689261377635"} 2023-07-13 15:16:17,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:17,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:17,643 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5d388672e5b8071d72e554eab3e1e298, regionState=CLOSED 2023-07-13 15:16:17,643 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377643"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377643"}]},"ts":"1689261377643"} 2023-07-13 15:16:17,645 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=47 2023-07-13 15:16:17,646 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=47, state=SUCCESS; CloseRegionProcedure 8ac5168029360d92f05ba4adf26d125e, server=jenkins-hbase4.apache.org,36737,1689261368119 in 210 msec 2023-07-13 15:16:17,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,648 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea. 2023-07-13 15:16:17,649 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f. 
2023-07-13 15:16:17,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a98098827adf98b2694320eec92db17f: 2023-07-13 15:16:17,650 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=8ac5168029360d92f05ba4adf26d125e, UNASSIGN in 268 msec 2023-07-13 15:16:17,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3252e192329bbf43185c413b7aaaccea: 2023-07-13 15:16:17,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3faf77970ff176e6c1dec41c397f3124, disabling compactions & flushes 2023-07-13 15:16:17,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:17,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:17,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. after waiting 0 ms 2023-07-13 15:16:17,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:17,653 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5. 
2023-07-13 15:16:17,653 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cce925c7292c9863994eb4ffb8b4bdd5: 2023-07-13 15:16:17,654 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=46 2023-07-13 15:16:17,655 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=46, state=SUCCESS; CloseRegionProcedure 5d388672e5b8071d72e554eab3e1e298, server=jenkins-hbase4.apache.org,34275,1689261367926 in 214 msec 2023-07-13 15:16:17,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:17,658 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=3252e192329bbf43185c413b7aaaccea, regionState=CLOSED 2023-07-13 15:16:17,658 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377658"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377658"}]},"ts":"1689261377658"} 2023-07-13 15:16:17,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a98098827adf98b2694320eec92db17f 2023-07-13 15:16:17,660 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:17,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=5d388672e5b8071d72e554eab3e1e298, UNASSIGN in 277 msec 2023-07-13 15:16:17,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124. 2023-07-13 15:16:17,662 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3faf77970ff176e6c1dec41c397f3124: 2023-07-13 15:16:17,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 782714c9895592e9a14e4144491fc645, disabling compactions & flushes 2023-07-13 15:16:17,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:17,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:17,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. after waiting 0 ms 2023-07-13 15:16:17,664 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 
2023-07-13 15:16:17,664 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=a98098827adf98b2694320eec92db17f, regionState=CLOSED 2023-07-13 15:16:17,664 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377664"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377664"}]},"ts":"1689261377664"} 2023-07-13 15:16:17,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:17,667 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=cce925c7292c9863994eb4ffb8b4bdd5, regionState=CLOSED 2023-07-13 15:16:17,668 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377667"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377667"}]},"ts":"1689261377667"} 2023-07-13 15:16:17,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:17,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:17,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fd48e3c2a0303f6a03e03951e4f75f1d, disabling compactions & flushes 2023-07-13 15:16:17,671 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:17,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 2023-07-13 15:16:17,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. after waiting 0 ms 2023-07-13 15:16:17,671 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 
2023-07-13 15:16:17,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-13 15:16:17,677 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=3faf77970ff176e6c1dec41c397f3124, regionState=CLOSED 2023-07-13 15:16:17,677 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377677"}]},"ts":"1689261377677"} 2023-07-13 15:16:17,678 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=52 2023-07-13 15:16:17,678 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=52, state=SUCCESS; CloseRegionProcedure 3252e192329bbf43185c413b7aaaccea, server=jenkins-hbase4.apache.org,36737,1689261368119 in 258 msec 2023-07-13 15:16:17,684 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,685 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645. 2023-07-13 15:16:17,685 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 782714c9895592e9a14e4144491fc645: 2023-07-13 15:16:17,688 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 782714c9895592e9a14e4144491fc645 2023-07-13 15:16:17,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=50 2023-07-13 15:16:17,689 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=50, state=SUCCESS; CloseRegionProcedure a98098827adf98b2694320eec92db17f, server=jenkins-hbase4.apache.org,43693,1689261373307 in 247 msec 2023-07-13 15:16:17,689 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=782714c9895592e9a14e4144491fc645, regionState=CLOSED 2023-07-13 15:16:17,689 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3252e192329bbf43185c413b7aaaccea, UNASSIGN in 297 msec 2023-07-13 15:16:17,690 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689261377689"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377689"}]},"ts":"1689261377689"} 2023-07-13 15:16:17,691 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=48 2023-07-13 15:16:17,691 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=48, state=SUCCESS; CloseRegionProcedure cce925c7292c9863994eb4ffb8b4bdd5, server=jenkins-hbase4.apache.org,41955,1689261371593 in 261 msec 2023-07-13 15:16:17,693 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=a98098827adf98b2694320eec92db17f, UNASSIGN in 308 msec 2023-07-13 15:16:17,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=cce925c7292c9863994eb4ffb8b4bdd5, UNASSIGN in 313 msec 2023-07-13 15:16:17,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=49 2023-07-13 15:16:17,699 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=49, state=SUCCESS; CloseRegionProcedure 3faf77970ff176e6c1dec41c397f3124, server=jenkins-hbase4.apache.org,34275,1689261367926 in 268 msec 2023-07-13 15:16:17,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:17,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-13 15:16:17,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=3faf77970ff176e6c1dec41c397f3124, UNASSIGN in 321 msec 2023-07-13 15:16:17,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; CloseRegionProcedure 782714c9895592e9a14e4144491fc645, server=jenkins-hbase4.apache.org,43693,1689261373307 in 288 msec 2023-07-13 15:16:17,705 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=782714c9895592e9a14e4144491fc645, UNASSIGN in 321 msec 2023-07-13 15:16:17,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d. 
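Editor's note, not part of the captured log: the UNASSIGN and CloseRegionProcedure entries above, the DisableTableProcedure pid=45 that finishes just below, and the DeleteTableProcedure pid=66 with its HFileArchiver activity further down are the server side of a plain disable-then-delete issued by the test client. A hypothetical sketch of those calls, reusing the admin and table handles from the creation sketch earlier:

// Assumed client calls behind pid=45 (disable: one UNASSIGN per region)
// and pid=66 (delete: region directories archived by HFileArchiver).
admin.disableTable(table);
admin.deleteTable(table);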
2023-07-13 15:16:17,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fd48e3c2a0303f6a03e03951e4f75f1d: 2023-07-13 15:16:17,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:17,712 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=fd48e3c2a0303f6a03e03951e4f75f1d, regionState=CLOSED 2023-07-13 15:16:17,712 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261377712"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261377712"}]},"ts":"1689261377712"} 2023-07-13 15:16:17,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=51 2023-07-13 15:16:17,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=51, state=SUCCESS; CloseRegionProcedure fd48e3c2a0303f6a03e03951e4f75f1d, server=jenkins-hbase4.apache.org,34275,1689261367926 in 304 msec 2023-07-13 15:16:17,721 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=45 2023-07-13 15:16:17,721 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=45, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateMultiRegion, region=fd48e3c2a0303f6a03e03951e4f75f1d, UNASSIGN in 336 msec 2023-07-13 15:16:17,722 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261377722"}]},"ts":"1689261377722"} 2023-07-13 15:16:17,724 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateMultiRegion, state=DISABLED in hbase:meta 2023-07-13 15:16:17,727 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testCreateMultiRegion to state=DISABLED 2023-07-13 15:16:17,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion in 363 msec 2023-07-13 15:16:17,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=45 2023-07-13 15:16:17,977 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateMultiRegion, procId: 45 completed 2023-07-13 15:16:17,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateMultiRegion 2023-07-13 15:16:17,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:17,981 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=66, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:17,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateMultiRegion' from rsgroup 'default' 2023-07-13 15:16:17,982 DEBUG 
[PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=66, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:17,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:17,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:17,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:17,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:18,002 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:18,006 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/recovered.edits] 2023-07-13 15:16:18,007 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/recovered.edits] 2023-07-13 15:16:18,008 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/recovered.edits] 2023-07-13 15:16:18,008 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/recovered.edits] 2023-07-13 15:16:18,008 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/recovered.edits] 2023-07-13 15:16:18,009 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/recovered.edits] 2023-07-13 15:16:18,009 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/recovered.edits] 2023-07-13 15:16:18,009 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/recovered.edits] 2023-07-13 15:16:18,028 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/recovered.edits/4.seqid to 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298/recovered.edits/4.seqid 2023-07-13 15:16:18,030 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e/recovered.edits/4.seqid 2023-07-13 15:16:18,030 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/5d388672e5b8071d72e554eab3e1e298 2023-07-13 15:16:18,031 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:18,032 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea/recovered.edits/4.seqid 2023-07-13 15:16:18,033 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/8ac5168029360d92f05ba4adf26d125e 2023-07-13 15:16:18,033 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:18,033 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124/recovered.edits/4.seqid 2023-07-13 15:16:18,033 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f/recovered.edits/4.seqid 2023-07-13 15:16:18,034 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/recovered.edits/4.seqid to 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d/recovered.edits/4.seqid 2023-07-13 15:16:18,035 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3252e192329bbf43185c413b7aaaccea 2023-07-13 15:16:18,035 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b/recovered.edits/4.seqid 2023-07-13 15:16:18,036 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/3faf77970ff176e6c1dec41c397f3124 2023-07-13 15:16:18,036 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/a98098827adf98b2694320eec92db17f 2023-07-13 15:16:18,036 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/fd48e3c2a0303f6a03e03951e4f75f1d 2023-07-13 15:16:18,037 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5/recovered.edits/4.seqid 2023-07-13 15:16:18,037 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/recovered.edits] 2023-07-13 15:16:18,037 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/4b10da3444497190f82aebe11c15260b 2023-07-13 15:16:18,037 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/cce925c7292c9863994eb4ffb8b4bdd5 2023-07-13 15:16:18,038 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/recovered.edits] 2023-07-13 
15:16:18,050 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc/recovered.edits/4.seqid 2023-07-13 15:16:18,051 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645/recovered.edits/4.seqid 2023-07-13 15:16:18,051 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/9651487d9ba599547c9fb995a3d301dc 2023-07-13 15:16:18,051 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateMultiRegion/782714c9895592e9a14e4144491fc645 2023-07-13 15:16:18,051 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCreateMultiRegion regions 2023-07-13 15:16:18,055 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=66, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:18,068 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 10 rows of Group_testCreateMultiRegion from hbase:meta 2023-07-13 15:16:18,072 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateMultiRegion' descriptor. 2023-07-13 15:16:18,074 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=66, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateMultiRegion' from region states. 
2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x02\\x04\\x06\\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\"$\u0026(,1689261375227.8ac5168029360d92f05ba4adf26d125e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\x82\\x84\\x86\\x88,1689261375227.a98098827adf98b2694320eec92db17f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,075 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xA2\\xA4\\xA6\\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,076 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xC2\\xC4\\xC6\\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,076 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x00\\xE2\\xE4\\xE6\\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,076 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,\\x01\\x03\\x05\\x07\\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,076 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261378075"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,079 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 10 regions from META 2023-07-13 15:16:18,079 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5d388672e5b8071d72e554eab3e1e298, NAME => 'Group_testCreateMultiRegion,\x00\x02\x04\x06\x08,1689261375227.5d388672e5b8071d72e554eab3e1e298.', STARTKEY => '\x00\x02\x04\x06\x08', ENDKEY => 
'\x00"$&('}, {ENCODED => 8ac5168029360d92f05ba4adf26d125e, NAME => 'Group_testCreateMultiRegion,\x00"$&(,1689261375227.8ac5168029360d92f05ba4adf26d125e.', STARTKEY => '\x00"$&(', ENDKEY => '\x00BDFH'}, {ENCODED => cce925c7292c9863994eb4ffb8b4bdd5, NAME => 'Group_testCreateMultiRegion,\x00BDFH,1689261375227.cce925c7292c9863994eb4ffb8b4bdd5.', STARTKEY => '\x00BDFH', ENDKEY => '\x00bdfh'}, {ENCODED => 3faf77970ff176e6c1dec41c397f3124, NAME => 'Group_testCreateMultiRegion,\x00bdfh,1689261375227.3faf77970ff176e6c1dec41c397f3124.', STARTKEY => '\x00bdfh', ENDKEY => '\x00\x82\x84\x86\x88'}, {ENCODED => a98098827adf98b2694320eec92db17f, NAME => 'Group_testCreateMultiRegion,\x00\x82\x84\x86\x88,1689261375227.a98098827adf98b2694320eec92db17f.', STARTKEY => '\x00\x82\x84\x86\x88', ENDKEY => '\x00\xA2\xA4\xA6\xA8'}, {ENCODED => fd48e3c2a0303f6a03e03951e4f75f1d, NAME => 'Group_testCreateMultiRegion,\x00\xA2\xA4\xA6\xA8,1689261375227.fd48e3c2a0303f6a03e03951e4f75f1d.', STARTKEY => '\x00\xA2\xA4\xA6\xA8', ENDKEY => '\x00\xC2\xC4\xC6\xC8'}, {ENCODED => 3252e192329bbf43185c413b7aaaccea, NAME => 'Group_testCreateMultiRegion,\x00\xC2\xC4\xC6\xC8,1689261375227.3252e192329bbf43185c413b7aaaccea.', STARTKEY => '\x00\xC2\xC4\xC6\xC8', ENDKEY => '\x00\xE2\xE4\xE6\xE8'}, {ENCODED => 4b10da3444497190f82aebe11c15260b, NAME => 'Group_testCreateMultiRegion,\x00\xE2\xE4\xE6\xE8,1689261375227.4b10da3444497190f82aebe11c15260b.', STARTKEY => '\x00\xE2\xE4\xE6\xE8', ENDKEY => '\x01\x03\x05\x07\x09'}, {ENCODED => 9651487d9ba599547c9fb995a3d301dc, NAME => 'Group_testCreateMultiRegion,\x01\x03\x05\x07\x09,1689261375227.9651487d9ba599547c9fb995a3d301dc.', STARTKEY => '\x01\x03\x05\x07\x09', ENDKEY => ''}, {ENCODED => 782714c9895592e9a14e4144491fc645, NAME => 'Group_testCreateMultiRegion,,1689261375227.782714c9895592e9a14e4144491fc645.', STARTKEY => '', ENDKEY => '\x00\x02\x04\x06\x08'}] 2023-07-13 15:16:18,079 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateMultiRegion' as deleted. 
2023-07-13 15:16:18,079 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateMultiRegion","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261378079"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:18,081 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateMultiRegion state from META 2023-07-13 15:16:18,084 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=66, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:18,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion in 106 msec 2023-07-13 15:16:18,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-13 15:16:18,093 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateMultiRegion, procId: 66 completed 2023-07-13 15:16:18,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:18,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:18,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:18,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:18,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:18,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:18,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:18,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:18,115 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:18,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:18,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:18,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:18,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:18,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:18,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:18,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 250 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262578132, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:18,133 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:18,135 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:18,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,136 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:18,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:18,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:18,156 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateMultiRegion Thread=506 (was 497) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-6 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50aa0278-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_147708220_17 at /127.0.0.1:42976 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632709426_17 at /127.0.0.1:59092 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1366339281_17 at /127.0.0.1:49132 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=787 (was 759) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=499 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4329 (was 4506) 2023-07-13 15:16:18,156 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-13 15:16:18,173 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=506, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=499, ProcessCount=172, AvailableMemoryMB=4329 2023-07-13 15:16:18,173 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-13 15:16:18,174 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testNamespaceCreateAndAssign 2023-07-13 15:16:18,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:18,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:18,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:18,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:18,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:18,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:18,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:18,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:18,196 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:18,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:18,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,201 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:18,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:18,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:18,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:18,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:18,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 278 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262578212, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:18,213 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:18,215 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:18,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:18,216 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:18,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:18,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:18,218 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(118): testNamespaceCreateAndAssign 2023-07-13 15:16:18,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:18,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:18,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-13 15:16:18,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:18,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:18,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:18,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:18,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:18,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
2023-07-13 15:16:18,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34275] to rsgroup appInfo 2023-07-13 15:16:18,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:18,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:18,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:18,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:18,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(238): Moving server region 8f9b3c3c0c701a7e057738cfe2a31027, which do not belong to RSGroup appInfo 2023-07-13 15:16:18,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:18,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:18,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:18,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:18,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:18,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:18,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup appInfo 2023-07-13 15:16:18,244 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:18,245 INFO 
[PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:18,245 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261378245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261378245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261378245"}]},"ts":"1689261378245"} 2023-07-13 15:16:18,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=68, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:18,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-13 15:16:18,247 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:18,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=67, state=RUNNABLE; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:18,248 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34275,1689261367926, state=CLOSING 2023-07-13 15:16:18,250 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:18,250 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:18,250 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=68, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:18,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:18,401 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 15:16:18,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:18,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
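For reference, the rsgroup operations driving the log records above (add rsgroup 'appInfo', then move one region server into it, which forces its hbase:meta and hbase:namespace regions back onto the remaining 'default' group members) correspond roughly to the client-side calls below. This is a minimal illustrative sketch, assuming the branch-2.4 RSGroupAdminClient API visible in the stack traces, not the test's actual code; the class name, host, and port are placeholders.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    // Assumes the rsgroup coprocessor endpoint is installed on the master,
    // as it is in this test (RSGroupAdminEndpoint handles the RPCs logged above).
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // AddRSGroup RPC, as in "add rsgroup appInfo" above.
      rsGroupAdmin.addRSGroup("appInfo");
      // Placeholder server address; in the log this is jenkins-hbase4.apache.org:34275.
      Address server = Address.fromParts("regionserver-host", 16020);
      // MoveServers RPC; regions on the moved server are reassigned to 'default' members.
      rsGroupAdmin.moveServers(Collections.singleton(server), "appInfo");
    }
  }
}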
2023-07-13 15:16:18,403 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:18,403 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:18,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8f9b3c3c0c701a7e057738cfe2a31027 1/1 column families, dataSize=150 B heapSize=632 B 2023-07-13 15:16:18,403 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=32.96 KB heapSize=52.89 KB 2023-07-13 15:16:18,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=150 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/.tmp/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:18,429 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=29.90 KB at sequenceid=79 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/.tmp/info/8877d21c56c24ede9d59119e77b5fd77 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:18,440 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77, entries=3, sequenceid=7, filesize=4.9 K 2023-07-13 15:16:18,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~150 B/150, 
heapSize ~616 B/616, currentSize=0 B/0 for 8f9b3c3c0c701a7e057738cfe2a31027 in 46ms, sequenceid=7, compaction requested=false 2023-07-13 15:16:18,460 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.19 KB at sequenceid=79 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/rep_barrier/eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-13 15:16:18,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:18,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:18,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8f9b3c3c0c701a7e057738cfe2a31027 move to jenkins-hbase4.apache.org,36737,1689261368119 record at close sequenceid=7 2023-07-13 15:16:18,463 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=69, ppid=67, state=RUNNABLE; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:18,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:18,467 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,482 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.87 KB at sequenceid=79 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,487 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,489 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/6c4d1a30fa324b9292b3c505317b9f7f as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f, entries=48, sequenceid=79, filesize=10.2 K 2023-07-13 15:16:18,497 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/rep_barrier/eccc45434a07413990578d9b62a2e144 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,504 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144, entries=11, sequenceid=79, filesize=6.1 K 2023-07-13 15:16:18,505 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/3f9c4b698d1c4d0292338c1574eb859a as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,512 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,513 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a, entries=17, sequenceid=79, filesize=6.2 K 2023-07-13 15:16:18,514 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~32.96 KB/33749, heapSize ~52.84 KB/54112, currentSize=0 B/0 for 1588230740 in 111ms, sequenceid=79, compaction requested=false 2023-07-13 15:16:18,523 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/82.seqid, newMaxSeqId=82, maxSeqId=1 2023-07-13 15:16:18,524 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:18,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:18,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:18,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,36737,1689261368119 record at close sequenceid=79 2023-07-13 15:16:18,527 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-13 15:16:18,527 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 15:16:18,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=68 2023-07-13 15:16:18,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, 
ppid=68, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34275,1689261367926 in 277 msec 2023-07-13 15:16:18,530 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=68, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:18,680 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:18,681 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36737,1689261368119, state=OPENING 2023-07-13 15:16:18,682 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:18,682 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:18,682 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=68, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:18,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:18,840 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:18,842 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36737%2C1689261368119.meta, suffix=.meta, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:18,860 DEBUG [RS-EventLoopGroup-8-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:18,861 DEBUG [RS-EventLoopGroup-8-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:18,861 DEBUG [RS-EventLoopGroup-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:18,867 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119/jenkins-hbase4.apache.org%2C36737%2C1689261368119.meta.1689261378843.meta 2023-07-13 15:16:18,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]] 2023-07-13 15:16:18,867 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:18,868 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:18,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:18,870 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:18,871 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:18,871 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:18,872 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:18,881 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete 
Family Bloom (CompoundBloomFilter) metadata for 6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,882 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:18,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:18,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:18,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:18,883 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:18,884 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:18,891 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,891 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:18,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:18,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:18,893 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:18,893 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:18,893 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:18,902 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,902 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:18,902 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:18,903 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:18,905 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:18,909 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 15:16:18,911 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:18,912 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=83; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12010549440, jitterRate=0.11856958270072937}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:18,912 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:18,916 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=71, masterSystemTime=1689261378835 2023-07-13 15:16:18,918 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:18,918 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:18,923 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36737,1689261368119, state=OPEN 2023-07-13 15:16:18,924 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:18,924 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:18,925 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=CLOSED 2023-07-13 15:16:18,925 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261378925"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261378925"}]},"ts":"1689261378925"} 2023-07-13 15:16:18,926 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34275] ipc.CallRunner(144): callId: 180 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:52190 deadline: 1689261438926, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=36737 startCode=1689261368119. As of locationSeqNum=79. 
2023-07-13 15:16:18,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=68 2023-07-13 15:16:18,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=68, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36737,1689261368119 in 242 msec 2023-07-13 15:16:18,929 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 682 msec 2023-07-13 15:16:19,027 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:19,028 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37130, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:19,033 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=67 2023-07-13 15:16:19,033 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=67, state=SUCCESS; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,34275,1689261367926 in 782 msec 2023-07-13 15:16:19,033 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=67, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:19,184 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:19,184 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:19,184 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261379184"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261379184"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261379184"}]},"ts":"1689261379184"} 2023-07-13 15:16:19,186 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:19,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure.ProcedureSyncWait(216): waitFor pid=67 2023-07-13 15:16:19,342 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:19,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:19,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:19,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,344 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,345 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:19,346 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:19,346 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f9b3c3c0c701a7e057738cfe2a31027 columnFamilyName info 2023-07-13 15:16:19,355 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:19,355 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(310): Store=8f9b3c3c0c701a7e057738cfe2a31027/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:19,356 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,357 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,361 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:19,362 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f9b3c3c0c701a7e057738cfe2a31027; next sequenceid=11; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10669712320, jitterRate=-0.006305605173110962}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:19,362 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:19,363 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., pid=72, masterSystemTime=1689261379338 2023-07-13 15:16:19,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:19,365 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:19,365 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, openSeqNum=11, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:19,366 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261379365"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261379365"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261379365"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261379365"}]},"ts":"1689261379365"} 2023-07-13 15:16:19,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-13 15:16:19,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,36737,1689261368119 in 181 msec 2023-07-13 15:16:19,371 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE in 1.1270 sec 2023-07-13 15:16:20,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34275,1689261367926] are moved back to default 2023-07-13 15:16:20,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-13 15:16:20,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:20,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:20,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:20,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-13 15:16:20,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:20,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'appInfo'} 2023-07-13 15:16:20,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:20,262 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34275] ipc.CallRunner(144): callId: 185 service: ClientService methodName: Get size: 120 connection: 172.31.14.131:52190 deadline: 1689261440262, 
exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=36737 startCode=1689261368119. As of locationSeqNum=7. 2023-07-13 15:16:20,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=73 2023-07-13 15:16:20,377 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:20,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 120 msec 2023-07-13 15:16:20,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=73 2023-07-13 15:16:20,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:20,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:20,480 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:20,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_foo" qualifier: "Group_testCreateAndAssign" procId is: 74 2023-07-13 15:16:20,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:20,483 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:20,483 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:20,484 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:20,484 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:20,488 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:20,489 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,490 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d empty. 
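At this point the master has finished MoveServers (default => appInfo) and created the Group_foo namespace with hbase.rsgroup.name => 'appInfo', so tables created in that namespace will be placed on the appInfo servers. A hedged sketch of the corresponding client calls is below; it uses the branch-2.4 internal RSGroupAdminClient that also appears in the stack trace later in this log, and the region server address is a placeholder, not a value from this run.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class NamespaceInGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Create the group and move one region server into it
          // (host:port is a placeholder for one of the mini-cluster servers).
          groups.addRSGroup("appInfo");
          groups.moveServers(Collections.singleton(Address.fromString("rs-1.example.org:16020")), "appInfo");
          // Bind the namespace to the group; regions of tables created in
          // Group_foo are then only assigned to servers belonging to appInfo.
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "appInfo")
              .build());
        }
      }
    }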
2023-07-13 15:16:20,491 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,491 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-13 15:16:20,514 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:20,515 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(7675): creating {ENCODED => 185fbf0c037aee0bfb8edc05f7a1645d, NAME => 'Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_foo:Group_testCreateAndAssign', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1604): Closing 185fbf0c037aee0bfb8edc05f7a1645d, disabling compactions & flushes 2023-07-13 15:16:20,529 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. after waiting 0 ms 2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:20,529 INFO [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 
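Alongside the table-creation procedure the rsgroup endpoint rewrites its znodes (/hbase/rsgroup/default, /hbase/rsgroup/master, /hbase/rsgroup/appInfo), which is how the membership reported by the ListRSGroupInfos and GetRSGroupInfo requests in this log is persisted. A small read-side sketch with the same internal client; the group name matches the one used here, everything else is a placeholder.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // Same calls as the ListRSGroupInfos / GetRSGroupInfo requests seen in the log.
          for (RSGroupInfo g : groups.listRSGroups()) {
            System.out.println(g.getName() + " servers=" + g.getServers() + " tables=" + g.getTables());
          }
          RSGroupInfo appInfo = groups.getRSGroupInfo("appInfo");
          System.out.println("appInfo => " + appInfo);
        }
      }
    }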
2023-07-13 15:16:20,529 DEBUG [RegionOpenAndInit-Group_foo:Group_testCreateAndAssign-pool-0] regionserver.HRegion(1558): Region close journal for 185fbf0c037aee0bfb8edc05f7a1645d: 2023-07-13 15:16:20,532 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:20,533 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689261380533"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261380533"}]},"ts":"1689261380533"} 2023-07-13 15:16:20,535 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:20,536 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:20,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261380536"}]},"ts":"1689261380536"} 2023-07-13 15:16:20,538 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLING in hbase:meta 2023-07-13 15:16:20,541 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, ASSIGN}] 2023-07-13 15:16:20,543 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, ASSIGN 2023-07-13 15:16:20,544 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=75, ppid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:20,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:20,695 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=185fbf0c037aee0bfb8edc05f7a1645d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:20,695 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689261380695"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261380695"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261380695"}]},"ts":"1689261380695"} 2023-07-13 15:16:20,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; OpenRegionProcedure 
185fbf0c037aee0bfb8edc05f7a1645d, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:20,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:20,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 185fbf0c037aee0bfb8edc05f7a1645d, NAME => 'Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndAssign 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,867 INFO [StoreOpener-185fbf0c037aee0bfb8edc05f7a1645d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,868 DEBUG [StoreOpener-185fbf0c037aee0bfb8edc05f7a1645d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/f 2023-07-13 15:16:20,869 DEBUG [StoreOpener-185fbf0c037aee0bfb8edc05f7a1645d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/f 2023-07-13 15:16:20,869 INFO [StoreOpener-185fbf0c037aee0bfb8edc05f7a1645d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 185fbf0c037aee0bfb8edc05f7a1645d columnFamilyName f 2023-07-13 15:16:20,870 INFO [StoreOpener-185fbf0c037aee0bfb8edc05f7a1645d-1] regionserver.HStore(310): Store=185fbf0c037aee0bfb8edc05f7a1645d/f, memstore type=DefaultMemStore, 
storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:20,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:20,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:20,879 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 185fbf0c037aee0bfb8edc05f7a1645d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10237226720, jitterRate=-0.04658396542072296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:20,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 185fbf0c037aee0bfb8edc05f7a1645d: 2023-07-13 15:16:20,880 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d., pid=76, masterSystemTime=1689261380856 2023-07-13 15:16:20,882 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:20,882 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 
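The CreateTableProcedure above writes the FS layout for Group_foo:Group_testCreateAndAssign with a single family 'f', and the ASSIGN lands on jenkins-hbase4.apache.org,34275, the server that had been moved into appInfo. A sketch of the equivalent client-side call follows; the descriptor simply takes the column-family defaults the log prints (VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536), and the connection setup is a placeholder.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableInGroupSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_foo:Group_testCreateAndAssign");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // A single family 'f' with default attributes, matching the descriptor in the log.
          admin.createTable(TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());
          // Because the Group_foo namespace carries hbase.rsgroup.name=appInfo,
          // the new region can only be assigned to servers in that group.
        }
      }
    }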
2023-07-13 15:16:20,883 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=185fbf0c037aee0bfb8edc05f7a1645d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:20,883 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689261380883"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261380883"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261380883"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261380883"}]},"ts":"1689261380883"} 2023-07-13 15:16:20,886 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-13 15:16:20,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; OpenRegionProcedure 185fbf0c037aee0bfb8edc05f7a1645d, server=jenkins-hbase4.apache.org,34275,1689261367926 in 188 msec 2023-07-13 15:16:20,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=74 2023-07-13 15:16:20,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=74, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, ASSIGN in 346 msec 2023-07-13 15:16:20,889 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:20,890 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261380890"}]},"ts":"1689261380890"} 2023-07-13 15:16:20,891 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=ENABLED in hbase:meta 2023-07-13 15:16:20,894 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=74, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:20,897 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign in 417 msec 2023-07-13 15:16:21,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-13 15:16:21,164 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 74 completed 2023-07-13 15:16:21,164 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:21,170 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] 
procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-13 15:16:21,175 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261381175"}]},"ts":"1689261381175"} 2023-07-13 15:16:21,176 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLING in hbase:meta 2023-07-13 15:16:21,178 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_foo:Group_testCreateAndAssign to state=DISABLING 2023-07-13 15:16:21,179 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, UNASSIGN}] 2023-07-13 15:16:21,181 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, ppid=77, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, UNASSIGN 2023-07-13 15:16:21,182 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=185fbf0c037aee0bfb8edc05f7a1645d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:21,182 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689261381182"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261381182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261381182"}]},"ts":"1689261381182"} 2023-07-13 15:16:21,184 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 185fbf0c037aee0bfb8edc05f7a1645d, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:21,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-13 15:16:21,336 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:21,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 185fbf0c037aee0bfb8edc05f7a1645d, disabling compactions & flushes 2023-07-13 15:16:21,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:21,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:21,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 
after waiting 0 ms 2023-07-13 15:16:21,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:21,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:21,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d. 2023-07-13 15:16:21,342 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 185fbf0c037aee0bfb8edc05f7a1645d: 2023-07-13 15:16:21,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:21,344 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=185fbf0c037aee0bfb8edc05f7a1645d, regionState=CLOSED 2023-07-13 15:16:21,345 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"regioninfo","vlen":61,"tag":[],"timestamp":"1689261381344"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261381344"}]},"ts":"1689261381344"} 2023-07-13 15:16:21,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-13 15:16:21,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 185fbf0c037aee0bfb8edc05f7a1645d, server=jenkins-hbase4.apache.org,34275,1689261367926 in 162 msec 2023-07-13 15:16:21,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=77 2023-07-13 15:16:21,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=77, state=SUCCESS; TransitRegionStateProcedure table=Group_foo:Group_testCreateAndAssign, region=185fbf0c037aee0bfb8edc05f7a1645d, UNASSIGN in 169 msec 2023-07-13 15:16:21,350 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261381350"}]},"ts":"1689261381350"} 2023-07-13 15:16:21,351 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_foo:Group_testCreateAndAssign, state=DISABLED in hbase:meta 2023-07-13 15:16:21,354 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_foo:Group_testCreateAndAssign to state=DISABLED 2023-07-13 15:16:21,355 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign in 184 msec 2023-07-13 15:16:21,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-13 15:16:21,477 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 77 completed 2023-07-13 15:16:21,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] 
master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,481 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_foo:Group_testCreateAndAssign' from rsgroup 'appInfo' 2023-07-13 15:16:21,482 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:21,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:21,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:21,487 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:21,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-13 15:16:21,489 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/recovered.edits] 2023-07-13 15:16:21,496 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d/recovered.edits/4.seqid 2023-07-13 15:16:21,497 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_foo/Group_testCreateAndAssign/185fbf0c037aee0bfb8edc05f7a1645d 2023-07-13 15:16:21,497 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_foo:Group_testCreateAndAssign regions 2023-07-13 15:16:21,500 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, 
state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,502 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_foo:Group_testCreateAndAssign from hbase:meta 2023-07-13 15:16:21,504 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_foo:Group_testCreateAndAssign' descriptor. 2023-07-13 15:16:21,505 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,506 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_foo:Group_testCreateAndAssign' from region states. 2023-07-13 15:16:21,506 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261381506"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:21,509 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:21,509 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 185fbf0c037aee0bfb8edc05f7a1645d, NAME => 'Group_foo:Group_testCreateAndAssign,,1689261380476.185fbf0c037aee0bfb8edc05f7a1645d.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:21,509 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_foo:Group_testCreateAndAssign' as deleted. 2023-07-13 15:16:21,509 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_foo:Group_testCreateAndAssign","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261381509"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:21,511 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_foo:Group_testCreateAndAssign state from META 2023-07-13 15:16:21,514 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:21,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign in 36 msec 2023-07-13 15:16:21,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-13 15:16:21,590 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_foo:Group_testCreateAndAssign, procId: 80 completed 2023-07-13 15:16:21,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-13 15:16:21,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:21,609 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:21,613 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 
2023-07-13 15:16:21,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:21,616 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:21,617 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 15:16:21,617 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:21,619 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:21,621 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=81, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:21,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 20 msec 2023-07-13 15:16:21,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-13 15:16:21,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:21,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
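The procedures above follow the usual cleanup order: DisableTableProcedure (pid=77) unassigns the region, DeleteTableProcedure (pid=80) archives the files and clears hbase:meta, and DeleteNamespaceProcedure (pid=81) can then remove the now-empty Group_foo namespace. A minimal sketch of those client calls, with the connection setup again a placeholder:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableAndNamespaceSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_foo:Group_testCreateAndAssign");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          if (admin.tableExists(tn)) {
            admin.disableTable(tn); // a table must be disabled before it can be dropped
            admin.deleteTable(tn);  // archives region files and removes the table from hbase:meta
          }
          admin.deleteNamespace("Group_foo"); // only succeeds once the namespace holds no tables
        }
      }
    }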
2023-07-13 15:16:21,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:21,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:21,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:21,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:21,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:21,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:21,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:21,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:21,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
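The remaining entries are the TestRSGroupsBase teardown restoring the default layout: tables and servers are moved back to the default group, the extra groups are dropped, and the attempt to move the master's address (jenkins-hbase4.apache.org:38141) into a group is rejected with a ConstraintException because it is not a registered region server. A sketch of that restore sequence with the same internal client; group membership is read back rather than hard-coded.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RestoreDefaultGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          RSGroupInfo appInfo = groups.getRSGroupInfo("appInfo");
          if (appInfo != null) {
            // Send tables and servers back to the default group, then drop the group.
            groups.moveTables(appInfo.getTables(), RSGroupInfo.DEFAULT_GROUP);
            groups.moveServers(appInfo.getServers(), RSGroupInfo.DEFAULT_GROUP);
            groups.removeRSGroup("appInfo");
          }
          // Passing the master's host:port to moveServers fails with the
          // ConstraintException shown in the entries that follow.
        }
      }
    }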
2023-07-13 15:16:21,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:21,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34275] to rsgroup default 2023-07-13 15:16:21,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:21,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:21,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-13 15:16:21,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34275,1689261367926] are moved back to appInfo 2023-07-13 15:16:21,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-13 15:16:21,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:21,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-13 15:16:21,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:21,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:21,752 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:21,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:21,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:21,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:21,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 
15:16:21,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:21,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:21,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 367 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262581767, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:21,768 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:21,770 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:21,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,772 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:21,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:21,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:21,796 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testNamespaceCreateAndAssign Thread=520 (was 506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:59290 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-8 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:59282 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:38672 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:49178 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:42976 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:49194 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,36737,1689261368119.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5e7cb401-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:49166 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:59092 [Waiting for operation #16] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632709426_17 at /127.0.0.1:43022 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:49132 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294184884_17 at /127.0.0.1:43038 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 787) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=459 (was 499), ProcessCount=172 (was 172), AvailableMemoryMB=4212 (was 4329) 2023-07-13 15:16:21,797 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-13 15:16:21,819 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=520, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=172, AvailableMemoryMB=4210 2023-07-13 15:16:21,819 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=520 is superior to 500 2023-07-13 15:16:21,820 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testCreateAndDrop 2023-07-13 15:16:21,822 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:21,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:21,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
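[Editor's sketch] The entries just above and below record TestRSGroupsBase's per-method cleanup: every table and server is moved back to the "default" rsgroup and any leftover groups are dropped before the next test method runs. A minimal sketch of that cleanup against the hbase-rsgroup client API follows; it is illustrative rather than the test's actual code, the RSGroupAdminClient(Connection) constructor is assumed from branch-2.4, and the helper name restoreDefaultGroup is made up.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hypothetical helper: move everything back to the "default" group and drop extra groups,
    // roughly what the MoveTables/MoveServers/RemoveRSGroup requests in this log perform.
    static void restoreDefaultGroup(Connection conn) throws Exception {
      RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn); // constructor assumed
      for (RSGroupInfo group : groupAdmin.listRSGroups()) {
        if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
          continue; // "default" is the target group, nothing to restore
        }
        groupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);   // may be an empty set, as logged
        groupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
        groupAdmin.removeRSGroup(group.getName());
      }
    }

The ConstraintException that follows ("Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist") is raised when the test then tries to move the master's address into the re-created "master" group: port 38141 is the master RPC port rather than a live region server, so RSGroupAdminServer.moveServers rejects it, and TestRSGroupsBase merely logs it as "Got this on setup, FYI" and continues.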
2023-07-13 15:16:21,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-13 15:16:21,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-13 15:16:21,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-13 15:16:21,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-13 15:16:21,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-13 15:16:21,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-13 15:16:21,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-13 15:16:21,844 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-13 15:16:21,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-13 15:16:21,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-13 15:16:21,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-13 15:16:21,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-13 15:16:21,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-13 15:16:21,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-13 15:16:21,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-13 15:16:21,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master
2023-07-13 15:16:21,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:21,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 395 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262581864, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:21,865 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:21,869 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:21,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:21,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:21,870 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:21,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:21,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:21,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:21,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:21,877 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:21,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): 
Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCreateAndDrop" procId is: 82 2023-07-13 15:16:21,879 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:21,880 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:21,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-13 15:16:21,880 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:21,883 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:21,887 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:21,887 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d empty. 2023-07-13 15:16:21,888 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:21,888 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-13 15:16:21,927 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:21,929 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7db48ce827c097354ccadbbc6e651c3d, NAME => 'Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCreateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1604): Closing 7db48ce827c097354ccadbbc6e651c3d, disabling compactions & flushes 2023-07-13 15:16:21,953 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 
2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. after waiting 0 ms 2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:21,953 INFO [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:21,953 DEBUG [RegionOpenAndInit-Group_testCreateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 7db48ce827c097354ccadbbc6e651c3d: 2023-07-13 15:16:21,956 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:21,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261381957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261381957"}]},"ts":"1689261381957"} 2023-07-13 15:16:21,959 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:21,960 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:21,960 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261381960"}]},"ts":"1689261381960"} 2023-07-13 15:16:21,961 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLING in hbase:meta 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:21,966 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:21,966 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, ASSIGN}] 2023-07-13 15:16:21,969 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, ASSIGN 2023-07-13 15:16:21,969 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=83, ppid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43693,1689261373307; forceNewPlan=false, retain=false 2023-07-13 15:16:21,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-13 15:16:22,120 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:22,121 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=7db48ce827c097354ccadbbc6e651c3d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:22,122 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261382121"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261382121"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261382121"}]},"ts":"1689261382121"} 2023-07-13 15:16:22,124 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=83, state=RUNNABLE; OpenRegionProcedure 7db48ce827c097354ccadbbc6e651c3d, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:22,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-13 15:16:22,203 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:22,203 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:22,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 
2023-07-13 15:16:22,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7db48ce827c097354ccadbbc6e651c3d, NAME => 'Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:22,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCreateAndDrop 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:22,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,282 INFO [StoreOpener-7db48ce827c097354ccadbbc6e651c3d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf of region 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,284 DEBUG [StoreOpener-7db48ce827c097354ccadbbc6e651c3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/cf 2023-07-13 15:16:22,284 DEBUG [StoreOpener-7db48ce827c097354ccadbbc6e651c3d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/cf 2023-07-13 15:16:22,284 INFO [StoreOpener-7db48ce827c097354ccadbbc6e651c3d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7db48ce827c097354ccadbbc6e651c3d columnFamilyName cf 2023-07-13 15:16:22,285 INFO [StoreOpener-7db48ce827c097354ccadbbc6e651c3d-1] regionserver.HStore(310): Store=7db48ce827c097354ccadbbc6e651c3d/cf, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:22,285 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:22,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7db48ce827c097354ccadbbc6e651c3d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9639553120, jitterRate=-0.10224665701389313}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:22,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7db48ce827c097354ccadbbc6e651c3d: 2023-07-13 15:16:22,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d., pid=84, masterSystemTime=1689261382276 2023-07-13 15:16:22,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:22,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 
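
The block above is the region server finishing the open of the single region of Group_testCreateAndDrop after the master's CreateTableProcedure (pid=82) assigned it. For orientation only, the client-side call that drives this sequence is an ordinary Admin.createTable(); the sketch below is an illustration under assumed connection settings, not the test's actual source (the table name and the 'cf' family are taken from the log lines above).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestTableSketch {
  public static void main(String[] args) throws Exception {
    // Assumes hbase-site.xml on the classpath points at the (mini) cluster.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testCreateAndDrop"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf")) // family 'cf' as in the HStore lines above
          .build();
      // Blocks until the master reports the CreateTableProcedure done
      // ("Operation: CREATE ... procId: 82 completed" further down in the log).
      admin.createTable(desc);
    }
  }
}
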
2023-07-13 15:16:22,294 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=83 updating hbase:meta row=7db48ce827c097354ccadbbc6e651c3d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:22,294 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261382294"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261382294"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261382294"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261382294"}]},"ts":"1689261382294"} 2023-07-13 15:16:22,298 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=83 2023-07-13 15:16:22,298 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=83, state=SUCCESS; OpenRegionProcedure 7db48ce827c097354ccadbbc6e651c3d, server=jenkins-hbase4.apache.org,43693,1689261373307 in 172 msec 2023-07-13 15:16:22,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-13 15:16:22,300 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, ASSIGN in 332 msec 2023-07-13 15:16:22,301 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:22,301 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261382301"}]},"ts":"1689261382301"} 2023-07-13 15:16:22,302 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=ENABLED in hbase:meta 2023-07-13 15:16:22,304 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=82, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCreateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:22,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop in 431 msec 2023-07-13 15:16:22,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=82 2023-07-13 15:16:22,483 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCreateAndDrop, procId: 82 completed 2023-07-13 15:16:22,484 DEBUG [Listener at localhost/35161] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCreateAndDrop get assigned. 
Timeout = 60000ms 2023-07-13 15:16:22,484 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:22,485 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34275] ipc.CallRunner(144): callId: 410 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:52216 deadline: 1689261442485, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=36737 startCode=1689261368119. As of locationSeqNum=79. 2023-07-13 15:16:22,588 DEBUG [hconnection-0x50aa0278-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:22,590 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34192, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:22,593 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3484): All regions for table Group_testCreateAndDrop assigned to meta. Checking AM states. 2023-07-13 15:16:22,594 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:22,594 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3504): All regions for table Group_testCreateAndDrop assigned. 2023-07-13 15:16:22,594 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:22,598 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_testCreateAndDrop 2023-07-13 15:16:22,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCreateAndDrop 2023-07-13 15:16:22,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=85, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-13 15:16:22,602 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261382602"}]},"ts":"1689261382602"} 2023-07-13 15:16:22,604 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLING in hbase:meta 2023-07-13 15:16:22,606 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCreateAndDrop to state=DISABLING 2023-07-13 15:16:22,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, UNASSIGN}] 2023-07-13 15:16:22,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=86, ppid=85, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, UNASSIGN 2023-07-13 15:16:22,609 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=7db48ce827c097354ccadbbc6e651c3d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:22,610 DEBUG [PEWorker-2] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261382609"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261382609"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261382609"}]},"ts":"1689261382609"} 2023-07-13 15:16:22,611 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=87, ppid=86, state=RUNNABLE; CloseRegionProcedure 7db48ce827c097354ccadbbc6e651c3d, server=jenkins-hbase4.apache.org,43693,1689261373307}] 2023-07-13 15:16:22,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-13 15:16:22,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7db48ce827c097354ccadbbc6e651c3d, disabling compactions & flushes 2023-07-13 15:16:22,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:22,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:22,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. after waiting 0 ms 2023-07-13 15:16:22,764 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 2023-07-13 15:16:22,768 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:22,769 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d. 
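
The region close above is the unassign half of DisableTableProcedure pid=85; the entries that follow finish the disable and then run DeleteTableProcedure pid=88, which archives the region directory and removes the table from hbase:meta. Client-side, both halves are plain Admin calls; a minimal sketch, assuming an Admin handle obtained as in the earlier create sketch:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropGroupTestTableSketch {
  // 'admin' is assumed to come from ConnectionFactory.createConnection(conf).getAdmin().
  static void drop(Admin admin) throws IOException {
    TableName tn = TableName.valueOf("Group_testCreateAndDrop");
    admin.disableTable(tn); // DisableTableProcedure: region UNASSIGN/CLOSE, table state DISABLED
    admin.deleteTable(tn);  // DeleteTableProcedure: region dir archived, region and table state removed from hbase:meta
  }
}
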
2023-07-13 15:16:22,769 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7db48ce827c097354ccadbbc6e651c3d: 2023-07-13 15:16:22,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,771 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=86 updating hbase:meta row=7db48ce827c097354ccadbbc6e651c3d, regionState=CLOSED 2023-07-13 15:16:22,771 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261382771"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261382771"}]},"ts":"1689261382771"} 2023-07-13 15:16:22,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=87, resume processing ppid=86 2023-07-13 15:16:22,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, ppid=86, state=SUCCESS; CloseRegionProcedure 7db48ce827c097354ccadbbc6e651c3d, server=jenkins-hbase4.apache.org,43693,1689261373307 in 161 msec 2023-07-13 15:16:22,775 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-13 15:16:22,775 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; TransitRegionStateProcedure table=Group_testCreateAndDrop, region=7db48ce827c097354ccadbbc6e651c3d, UNASSIGN in 168 msec 2023-07-13 15:16:22,776 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261382776"}]},"ts":"1689261382776"} 2023-07-13 15:16:22,777 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCreateAndDrop, state=DISABLED in hbase:meta 2023-07-13 15:16:22,779 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCreateAndDrop to state=DISABLED 2023-07-13 15:16:22,781 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop in 181 msec 2023-07-13 15:16:22,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=85 2023-07-13 15:16:22,904 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCreateAndDrop, procId: 85 completed 2023-07-13 15:16:22,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCreateAndDrop 2023-07-13 15:16:22,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=88, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,908 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=88, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCreateAndDrop' from rsgroup 'default' 2023-07-13 15:16:22,909 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from 
filesystem for pid=88, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:22,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:22,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:22,913 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,915 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/cf, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/recovered.edits] 2023-07-13 15:16:22,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-13 15:16:22,920 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d/recovered.edits/4.seqid 2023-07-13 15:16:22,920 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCreateAndDrop/7db48ce827c097354ccadbbc6e651c3d 2023-07-13 15:16:22,921 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testCreateAndDrop regions 2023-07-13 15:16:22,923 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=88, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,927 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCreateAndDrop from hbase:meta 2023-07-13 15:16:22,928 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testCreateAndDrop' descriptor. 2023-07-13 15:16:22,929 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=88, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,929 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testCreateAndDrop' from region states. 
2023-07-13 15:16:22,930 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261382929"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:22,931 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:22,931 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7db48ce827c097354ccadbbc6e651c3d, NAME => 'Group_testCreateAndDrop,,1689261381873.7db48ce827c097354ccadbbc6e651c3d.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:22,931 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testCreateAndDrop' as deleted. 2023-07-13 15:16:22,931 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCreateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261382931"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:22,932 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testCreateAndDrop state from META 2023-07-13 15:16:22,934 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=88, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:22,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop in 29 msec 2023-07-13 15:16:23,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=88 2023-07-13 15:16:23,019 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCreateAndDrop, procId: 88 completed 2023-07-13 15:16:23,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:23,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
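
After the drop, the per-test cleanup resets rsgroup state: list the groups, move an empty table/server set back to 'default', remove and re-add the 'master' group, and finally attempt to move the master's own address into it. A hedged sketch of those calls using RSGroupAdminClient (the client named in the stack trace that follows); connection handling and error handling are assumed, and this is not the test's actual source:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  static void reset(Connection conn) throws IOException {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    groups.listRSGroups();                                 // RSGroupAdminService.ListRSGroupInfos
    groups.moveTables(Collections.emptySet(), "default");  // empty set: server logs "moveTables() passed an empty set. Ignoring."
    groups.moveServers(Collections.emptySet(), "default");
    groups.removeRSGroup("master");
    groups.addRSGroup("master");
    // Port 38141 is the active master's RPC port, not a region server, so the
    // server rejects this move with the ConstraintException seen in the trace below.
    groups.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:38141")),
        "master");
  }
}

Because the last moveServers() targets the master rather than a live region server, the teardown in TestRSGroupsBase only logs the resulting ConstraintException as a WARN ("Got this on setup, FYI") instead of failing the test.
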
2023-07-13 15:16:23,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:23,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:23,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,035 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:23,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 455 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262583047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:23,048 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:23,052 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,054 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,076 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateAndDrop Thread=522 (was 520) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:35368 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-539595235_17 at /127.0.0.1:38672 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50aa0278-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=810 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=459 (was 459), ProcessCount=172 (was 172), AvailableMemoryMB=4186 (was 4210) 2023-07-13 15:16:23,076 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 15:16:23,097 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=522, OpenFileDescriptor=810, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=172, AvailableMemoryMB=4183 2023-07-13 15:16:23,097 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 15:16:23,097 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testCloneSnapshot 2023-07-13 15:16:23,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:23,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 15:16:23,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:23,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:23,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:23,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:23,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:23,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:23,118 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:23,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:23,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:23,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:23,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 483 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262583135, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:23,136 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:23,138 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:23,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:23,139 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:23,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:23,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:23,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:23,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=89, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:23,145 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:23,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): 
Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testCloneSnapshot" procId is: 89 2023-07-13 15:16:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-13 15:16:23,146 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:23,147 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:23,147 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:23,149 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:23,151 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,152 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 empty. 2023-07-13 15:16:23,152 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,152 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-13 15:16:23,176 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:23,178 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc51566ecf280d9e1130becdb331b0b2, NAME => 'Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1604): Closing cc51566ecf280d9e1130becdb331b0b2, disabling compactions & flushes 2023-07-13 15:16:23,192 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 
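The create request recorded above (procId 89) is the server side of an ordinary Admin.createTable() call. For orientation, a minimal client-side sketch that would produce an equivalent table, using only the table name and the single 'test' family (one version) visible in the log; the class name, connection setup and everything else here are illustrative assumptions, not the test's actual code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestCloneSnapshotTable {
        public static void main(String[] args) throws Exception {
            TableName table = TableName.valueOf("Group_testCloneSnapshot");
            // One column family 'test' with a single version, matching the descriptor logged above.
            TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("test"))
                    .setMaxVersions(1).build())
                .build();
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Submits a CreateTableProcedure on the master (like pid=89 above) and blocks until it finishes.
                admin.createTable(desc);
            }
        }
    }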
2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. after waiting 0 ms 2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,192 INFO [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,192 DEBUG [RegionOpenAndInit-Group_testCloneSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for cc51566ecf280d9e1130becdb331b0b2: 2023-07-13 15:16:23,195 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:23,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261383196"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261383196"}]},"ts":"1689261383196"} 2023-07-13 15:16:23,198 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:23,199 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:23,199 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261383199"}]},"ts":"1689261383199"} 2023-07-13 15:16:23,200 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLING in hbase:meta 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:23,206 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:23,206 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, ASSIGN}] 2023-07-13 15:16:23,209 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, ASSIGN 2023-07-13 15:16:23,210 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=90, ppid=89, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36737,1689261368119; forceNewPlan=false, retain=false 2023-07-13 15:16:23,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-13 15:16:23,360 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:23,362 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=cc51566ecf280d9e1130becdb331b0b2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,362 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261383362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261383362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261383362"}]},"ts":"1689261383362"} 2023-07-13 15:16:23,367 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; OpenRegionProcedure cc51566ecf280d9e1130becdb331b0b2, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:23,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-13 15:16:23,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 
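The recurring "Checking to see if procedure is done pid=89" lines are the blocking client call polling the master for the procedure's completion. The same polling becomes explicit with the asynchronous flavour of the API; a rough sketch assuming the 'desc' descriptor and 'admin' handle from the previous sketch (the timeout value is an arbitrary illustration):

    // createTableAsync returns a future that completes by polling the master,
    // which is what produces the "Checking to see if procedure is done" lines above.
    java.util.concurrent.Future<Void> createFuture = admin.createTableAsync(desc, null); // null = no pre-split keys
    createFuture.get(60, java.util.concurrent.TimeUnit.SECONDS); // illustrative 60s client-side wait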
2023-07-13 15:16:23,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc51566ecf280d9e1130becdb331b0b2, NAME => 'Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:23,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:23,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,527 INFO [StoreOpener-cc51566ecf280d9e1130becdb331b0b2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,529 DEBUG [StoreOpener-cc51566ecf280d9e1130becdb331b0b2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/test 2023-07-13 15:16:23,529 DEBUG [StoreOpener-cc51566ecf280d9e1130becdb331b0b2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/test 2023-07-13 15:16:23,530 INFO [StoreOpener-cc51566ecf280d9e1130becdb331b0b2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc51566ecf280d9e1130becdb331b0b2 columnFamilyName test 2023-07-13 15:16:23,530 INFO [StoreOpener-cc51566ecf280d9e1130becdb331b0b2-1] regionserver.HStore(310): Store=cc51566ecf280d9e1130becdb331b0b2/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:23,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,532 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,536 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:23,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:23,539 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc51566ecf280d9e1130becdb331b0b2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9507195520, jitterRate=-0.1145734190940857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:23,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc51566ecf280d9e1130becdb331b0b2: 2023-07-13 15:16:23,540 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2., pid=91, masterSystemTime=1689261383520 2023-07-13 15:16:23,542 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 
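Once the region is open (the "Opened Group_testCloneSnapshot,..." line above), a client can confirm its placement through the region locator. A small sketch, assuming an open Connection named 'conn' as in the first sketch (requires org.apache.hadoop.hbase.HRegionLocation, org.apache.hadoop.hbase.TableName, org.apache.hadoop.hbase.client.Connection and org.apache.hadoop.hbase.client.RegionLocator):

    static void printRegionLocations(Connection conn) throws java.io.IOException {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("Group_testCloneSnapshot"))) {
            for (HRegionLocation loc : locator.getAllRegionLocations()) {
                // The log shows the single region of this table opening on jenkins-hbase4.apache.org,36737,1689261368119.
                System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
            }
        }
    }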
2023-07-13 15:16:23,543 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=cc51566ecf280d9e1130becdb331b0b2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,543 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261383543"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261383543"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261383543"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261383543"}]},"ts":"1689261383543"} 2023-07-13 15:16:23,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-13 15:16:23,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; OpenRegionProcedure cc51566ecf280d9e1130becdb331b0b2, server=jenkins-hbase4.apache.org,36737,1689261368119 in 181 msec 2023-07-13 15:16:23,554 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=90, resume processing ppid=89 2023-07-13 15:16:23,554 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, ppid=89, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, ASSIGN in 346 msec 2023-07-13 15:16:23,555 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:23,555 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261383555"}]},"ts":"1689261383555"} 2023-07-13 15:16:23,557 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=ENABLED in hbase:meta 2023-07-13 15:16:23,559 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=89, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testCloneSnapshot execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:23,561 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot in 417 msec 2023-07-13 15:16:23,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=89 2023-07-13 15:16:23,750 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testCloneSnapshot, procId: 89 completed 2023-07-13 15:16:23,750 DEBUG [Listener at localhost/35161] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testCloneSnapshot get assigned. Timeout = 60000ms 2023-07-13 15:16:23,750 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,755 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3484): All regions for table Group_testCloneSnapshot assigned to meta. Checking AM states. 
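The "Waiting until all regions of table ... get assigned" lines correspond to an HBaseTestingUtility helper the test calls once the create operation completes. Roughly, with TEST_UTIL standing in for the test's utility instance (the field name is an assumption):

    // Blocks until every region of the table is assigned both in hbase:meta and in the
    // master's assignment manager, with the 60000 ms timeout seen in the log.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testCloneSnapshot"));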
2023-07-13 15:16:23,755 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:23,755 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(3504): All regions for table Group_testCloneSnapshot assigned. 2023-07-13 15:16:23,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1583): Client=jenkins//172.31.14.131 snapshot request for:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-13 15:16:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotDescriptionUtils(316): Creation time not specified, setting to:1689261383767 (current time:1689261383767). 2023-07-13 15:16:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotDescriptionUtils(332): Snapshot current TTL value: 0 resetting it to default value: 0 2023-07-13 15:16:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] zookeeper.ReadOnlyZKClient(139): Connect 0x138398e9 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:23,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d5f1b82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:23,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:23,782 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34208, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:23,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x138398e9 to 127.0.0.1:56695 2023-07-13 15:16:23,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:23,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(601): No existing snapshot, attempting snapshot... 
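The snapshot request logged above ({ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }) is what a client-side Admin.snapshot() call looks like from the master's side. A minimal sketch, assuming the open 'admin' handle from earlier; for an enabled table this yields a FLUSH-type snapshot like the one in the log:

    // Takes the flush snapshot named in the log; blocks until the snapshot completes or times out.
    admin.snapshot("Group_testCloneSnapshot_snap", TableName.valueOf("Group_testCloneSnapshot"));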
2023-07-13 15:16:23,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(648): Table enabled, starting distributed snapshots for { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-13 15:16:23,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=92, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-13 15:16:23,811 DEBUG [PEWorker-3] locking.LockProcedure(309): LOCKED pid=92, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-13 15:16:23,812 INFO [PEWorker-3] procedure2.TimeoutExecutorThread(81): ADDED pid=92, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE; timeout=600000, timestamp=1689261983812 2023-07-13 15:16:23,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(653): Started snapshot: { ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } 2023-07-13 15:16:23,812 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(174): Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot 2023-07-13 15:16:23,814 DEBUG [PEWorker-1] locking.LockProcedure(242): UNLOCKED pid=92, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-13 15:16:23,815 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-13 15:16:23,816 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE in 14 msec 2023-07-13 15:16:23,816 DEBUG [PEWorker-1] locking.LockProcedure(309): LOCKED pid=93, state=RUNNABLE; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-13 15:16:23,817 INFO [PEWorker-1] procedure2.TimeoutExecutorThread(81): ADDED pid=93, state=WAITING_TIMEOUT, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED; timeout=600000, timestamp=1689261983817 2023-07-13 15:16:23,820 DEBUG [Listener at localhost/35161] client.HBaseAdmin(2418): Waiting a max of 300000 ms for snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }'' to complete. (max 20000 ms per retry) 2023-07-13 15:16:23,820 DEBUG [Listener at localhost/35161] client.HBaseAdmin(2428): (#1) Sleeping: 100ms while waiting for snapshot completion. 
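The "(#1) Sleeping: 100ms while waiting for snapshot completion" line is the blocking snapshot call polling the master with back-off. With the non-blocking variant the same polling can be written out explicitly; a sketch under the assumption of the 'admin' handle above and a surrounding method that may throw IOException and InterruptedException (requires org.apache.hadoop.hbase.client.SnapshotDescription; the fixed poll interval is illustrative):

    SnapshotDescription snap =
        new SnapshotDescription("Group_testCloneSnapshot_snap", TableName.valueOf("Group_testCloneSnapshot"));
    admin.snapshotAsync(snap);                  // starts the snapshot without blocking
    while (!admin.isSnapshotFinished(snap)) {   // same master-side check as "Checking to see if snapshot ... is done"
        Thread.sleep(100);                      // illustrative poll interval
    }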
2023-07-13 15:16:23,843 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] procedure.ProcedureCoordinator(165): Submitting procedure Group_testCloneSnapshot_snap 2023-07-13 15:16:23,843 INFO [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(191): Starting procedure 'Group_testCloneSnapshot_snap' 2023-07-13 15:16:23,844 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-13 15:16:23,844 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(199): Procedure 'Group_testCloneSnapshot_snap' starting 'acquire' 2023-07-13 15:16:23,844 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(241): Starting procedure 'Group_testCloneSnapshot_snap', kicking off acquire phase on members. 2023-07-13 15:16:23,845 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,845 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(92): Creating acquire znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,846 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,846 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,846 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(100): Watching for acquire node:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,846 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,846 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,846 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,846 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,847 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,847 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,847 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(203): Waiting for all members to 'acquire' 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,847 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-13 15:16:23,848 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-13 15:16:23,848 DEBUG [Listener at 
localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,849 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,849 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,849 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(186): Found procedure znode: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,849 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,850 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(212): start proc data length is 72 2023-07-13 15:16:23,850 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(214): Found data for znode:/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,850 DEBUG [zk-event-processor-pool-0] snapshot.RegionServerSnapshotManager(175): Launching subprocedure for snapshot Group_testCloneSnapshot_snap from table Group_testCloneSnapshot type FLUSH 2023-07-13 15:16:23,851 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-13 15:16:23,851 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-13 15:16:23,851 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-13 15:16:23,851 DEBUG [zk-event-processor-pool-0] procedure.ProcedureMember(140): Submitting new Subprocedure:Group_testCloneSnapshot_snap 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(151): Starting subprocedure 'Group_testCloneSnapshot_snap' with timeout 300000ms 2023-07-13 15:16:23,852 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-13 15:16:23,852 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 
'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-13 15:16:23,853 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-13 15:16:23,852 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-13 15:16:23,851 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(107): Scheduling process timer to run in: 300000 ms 2023-07-13 15:16:23,853 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-13 15:16:23,853 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,43693,1689261373307' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-13 15:16:23,853 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,36737,1689261368119' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(159): Subprocedure 'Group_testCloneSnapshot_snap' starting 'acquire' stage 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(161): Subprocedure 'Group_testCloneSnapshot_snap' locally acquired 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,41955,1689261371593' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-13 15:16:23,854 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(242): Member: 'jenkins-hbase4.apache.org,34275,1689261367926' joining acquired barrier for procedure (Group_testCloneSnapshot_snap) in zk 2023-07-13 15:16:23,855 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,860 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,860 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' 
subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(250): Watch for global barrier reached:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,860 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,863 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,861 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,863 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-13 15:16:23,861 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,863 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(166): Subprocedure 'Group_testCloneSnapshot_snap' coordinator notified of 'acquire', waiting on 'reached' or 'abort' from coordinator 2023-07-13 15:16:23,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-13 15:16:23,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-13 15:16:23,864 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-13 15:16:23,864 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-13 15:16:23,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,865 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,866 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-13 15:16:23,867 DEBUG [zk-event-processor-pool-0] procedure.Procedure(291): member: 'jenkins-hbase4.apache.org,36737,1689261368119' joining acquired barrier for procedure 'Group_testCloneSnapshot_snap' on coordinator 2023-07-13 15:16:23,867 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(207): Procedure 'Group_testCloneSnapshot_snap' starting 'in-barrier' execution. 2023-07-13 15:16:23,867 DEBUG [zk-event-processor-pool-0] procedure.Procedure(300): Waiting on: java.util.concurrent.CountDownLatch@6b1243ab[Count = 0] remaining members to acquire global barrier 2023-07-13 15:16:23,867 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(116): Creating reached barrier zk node:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,868 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,868 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,868 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,868 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,869 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(211): Waiting for all members to 'release' 2023-07-13 15:16:23,868 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-13 15:16:23,868 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-13 15:16:23,869 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,34275,1689261367926' in zk 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,43693,1689261373307' in zk 2023-07-13 15:16:23,869 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(128): Received reached global barrier:/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(180): Subprocedure 'Group_testCloneSnapshot_snap' received 'reached' from coordinator. 
2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-13 15:16:23,869 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,41955,1689261371593' in zk 2023-07-13 15:16:23,870 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] snapshot.FlushSnapshotSubprocedure(170): Flush Snapshot Tasks submitted for 1 regions 2023-07-13 15:16:23,870 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(301): Waiting for local region snapshots to finish. 2023-07-13 15:16:23,870 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(97): Starting snapshot operation on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,871 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(110): Flush Snapshotting region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. started... 2023-07-13 15:16:23,872 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] regionserver.HRegion(2446): Flush status journal for cc51566ecf280d9e1130becdb331b0b2: 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-13 15:16:23,872 DEBUG [member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-13 15:16:23,873 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-13 15:16:23,873 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-13 15:16:23,873 DEBUG [member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 
2023-07-13 15:16:23,875 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.SnapshotManifest(238): Storing 'Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.' region-info for snapshot=Group_testCloneSnapshot_snap 2023-07-13 15:16:23,882 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.SnapshotManifest(243): Creating references for hfiles 2023-07-13 15:16:23,888 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.SnapshotManifest(253): Adding snapshot references for [] hfiles 2023-07-13 15:16:23,920 DEBUG [Listener at localhost/35161] client.HBaseAdmin(2434): Getting current status of snapshot from master... 2023-07-13 15:16:23,922 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(137): ... Flush Snapshotting region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. completed. 2023-07-13 15:16:23,922 DEBUG [rs(jenkins-hbase4.apache.org,36737,1689261368119)-snapshot-pool-0] snapshot.FlushSnapshotSubprocedure$RegionSnapshotTask(140): Closing snapshot operation on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:23,923 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(312): Completed 1/1 local region snapshots. 2023-07-13 15:16:23,923 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(314): Completed 1 local region snapshots. 2023-07-13 15:16:23,923 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] snapshot.RegionServerSnapshotManager$SnapshotSubprocedurePool(345): cancelling 0 tasks for snapshot jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,923 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(182): Subprocedure 'Group_testCloneSnapshot_snap' locally completed 2023-07-13 15:16:23,923 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.ZKProcedureMemberRpcs(267): Marking procedure 'Group_testCloneSnapshot_snap' completed for member 'jenkins-hbase4.apache.org,36737,1689261368119' in zk 2023-07-13 15:16:23,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-13 15:16:23,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(404): Snapshoting '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' is still in progress! 2023-07-13 15:16:23,926 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(187): Subprocedure 'Group_testCloneSnapshot_snap' has notified controller of completion 2023-07-13 15:16:23,926 DEBUG [Listener at localhost/35161] client.HBaseAdmin(2428): (#2) Sleeping: 200ms while waiting for snapshot completion. 2023-07-13 15:16:23,926 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 
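After the flush subprocedure and manifest writes above, the snapshot becomes visible to clients. One way to check it from the Admin API, again assuming the open 'admin' handle (the printed format is just an illustration):

    // Lists completed snapshots; the one taken above should appear as Group_testCloneSnapshot_snap.
    for (SnapshotDescription sd : admin.listSnapshots()) {
        System.out.println(sd.getName() + " on " + sd.getTableName());
    }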
2023-07-13 15:16:23,926 DEBUG [member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0] procedure.Subprocedure(212): Subprocedure 'Group_testCloneSnapshot_snap' completed. 2023-07-13 15:16:23,926 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-13 15:16:23,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-13 15:16:23,928 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-13 15:16:23,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-13 15:16:23,929 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,930 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,930 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,930 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,931 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,931 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-13 15:16:23,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,932 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,933 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,934 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(218): Finished data from procedure 'Group_testCloneSnapshot_snap' member 'jenkins-hbase4.apache.org,36737,1689261368119': 2023-07-13 15:16:23,934 DEBUG [zk-event-processor-pool-0] procedure.Procedure(321): Member: 'jenkins-hbase4.apache.org,36737,1689261368119' released barrier for procedure'Group_testCloneSnapshot_snap', counting down latch. Waiting for 0 more 2023-07-13 15:16:23,934 INFO [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(216): Procedure 'Group_testCloneSnapshot_snap' execution completed 2023-07-13 15:16:23,934 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(225): Running finish phase. 
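With the snapshot procedure completed above, the test this log belongs to (testCloneSnapshot) would next clone the snapshot into a new table. A hedged sketch of that step; the clone target name used here is a hypothetical example, not taken from the log:

    // Creates a new table from the snapshot's manifest without copying store files.
    admin.cloneSnapshot("Group_testCloneSnapshot_snap",
        TableName.valueOf("Group_testCloneSnapshot_clone"));  // target table name is an assumption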
2023-07-13 15:16:23,934 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.Procedure(275): Finished coordinator procedure - removing self from list of running procedures 2023-07-13 15:16:23,934 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.ZKProcedureCoordinator(162): Attempting to clean out zk node for op:Group_testCloneSnapshot_snap 2023-07-13 15:16:23,934 INFO [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] procedure.ZKProcedureUtil(265): Clearing all znodes for procedure Group_testCloneSnapshot_snapincluding nodes /hbase/online-snapshot/acquired /hbase/online-snapshot/reached /hbase/online-snapshot/abort 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,936 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,936 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureCoordinator$1(194): Node created: /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 
INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,937 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(77): Received created event:/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(225): Current zk system: 2023-07-13 15:16:23,937 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(227): |-/hbase/online-snapshot 2023-07-13 15:16:23,937 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,939 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,939 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,939 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,939 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,939 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,940 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode 
/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,940 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,940 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,940 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,939 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-abort 2023-07-13 15:16:23,941 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(316): Aborting procedure member for znode /hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-acquired 2023-07-13 15:16:23,942 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,943 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,943 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,943 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,944 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,944 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,944 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,944 DEBUG 
[zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,945 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,945 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-reached 2023-07-13 15:16:23,945 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |----Group_testCloneSnapshot_snap 2023-07-13 15:16:23,946 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,946 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,946 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,947 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureUtil(244): |-------jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 DEBUG [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,950 DEBUG [(jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0] errorhandling.TimeoutExceptionInjector(87): Marking timer as complete - no error notifications will be received for this timer. 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,951 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.EnabledTableSnapshotHandler(97): Done waiting - online snapshot for Group_testCloneSnapshot_snap 2023-07-13 15:16:23,950 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,952 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifest(484): Convert to Single Snapshot Manifest for Group_testCloneSnapshot_snap 2023-07-13 15:16:23,950 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/acquired 2023-07-13 15:16:23,950 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,952 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(104): Received procedure start children changed event: /hbase/online-snapshot/acquired 2023-07-13 15:16:23,952 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,952 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,952 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,952 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/abort 2023-07-13 15:16:23,952 
DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:23,951 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/acquired/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:23,953 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,953 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:23,953 INFO [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,953 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/reached/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,953 INFO [zk-event-processor-pool-0] 
procedure.ZKProcedureMemberRpcs$1(107): Received procedure abort children changed event: /hbase/online-snapshot/abort 2023-07-13 15:16:23,953 DEBUG [zk-event-processor-pool-0] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:23,953 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/online-snapshot/abort/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,955 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotManifestV1(126): No regions under directory:hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-13 15:16:23,990 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.SnapshotDescriptionUtils(404): Sentinel is done, just moving the snapshot from hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/Group_testCloneSnapshot_snap 2023-07-13 15:16:24,037 INFO [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(229): Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed 2023-07-13 15:16:24,037 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(246): Launching cleanup of working dir:hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-13 15:16:24,038 ERROR [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(251): Couldn't delete snapshot working directory:hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp/Group_testCloneSnapshot_snap 2023-07-13 15:16:24,038 DEBUG [MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0] snapshot.TakeSnapshotHandler(257): Table snapshot journal : Running FLUSH table snapshot Group_testCloneSnapshot_snap C_M_SNAPSHOT_TABLE on table Group_testCloneSnapshot at 1689261383812Consolidate snapshot: Group_testCloneSnapshot_snap at 1689261383952 (+140 ms)Loading Region manifests for Group_testCloneSnapshot_snap at 1689261383952Writing data manifest for Group_testCloneSnapshot_snap at 1689261383964 (+12 ms)Verifying snapshot: Group_testCloneSnapshot_snap at 1689261383979 (+15 ms)Snapshot Group_testCloneSnapshot_snap of table Group_testCloneSnapshot completed at 1689261384037 (+58 ms) 2023-07-13 15:16:24,039 DEBUG [PEWorker-5] locking.LockProcedure(242): UNLOCKED pid=93, state=RUNNABLE, locked=true; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-13 15:16:24,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED in 224 msec 2023-07-13 15:16:24,126 DEBUG [Listener at localhost/35161] client.HBaseAdmin(2434): Getting current status of snapshot from master... 
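The entries above trace the client-driven FLUSH snapshot of Group_testCloneSnapshot completing on the master (manifest consolidation, move out of .tmp, and the "is done" polling). A minimal sketch of the corresponding Admin API call, assuming a reachable cluster and reusing the table and snapshot names from this test; the class name is hypothetical and this is illustrative only, not the test's own source:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TakeSnapshotSketch {  // hypothetical class name
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // FLUSH-type snapshot of the online table; the synchronous call returns
          // once the master reports the snapshot complete (the polling seen above).
          admin.snapshot("Group_testCloneSnapshot_snap",
              TableName.valueOf("Group_testCloneSnapshot"));
        }
      }
    }
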
2023-07-13 15:16:24,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1212): Checking to see if snapshot from request:{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 } is done 2023-07-13 15:16:24,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(401): Snapshot '{ ss=Group_testCloneSnapshot_snap table=Group_testCloneSnapshot type=FLUSH ttl=0 }' has completed, notifying client. 2023-07-13 15:16:24,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(486): Pre-moving table Group_testCloneSnapshot_clone to RSGroup default 2023-07-13 15:16:24,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:24,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:24,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:24,146 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(742): TableDescriptor of table {} not found. Skipping the region movement of this table. 2023-07-13 15:16:24,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CLONE_SNAPSHOT_PRE_OPERATION; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 ) 2023-07-13 15:16:24,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] snapshot.SnapshotManager(750): Clone snapshot=Group_testCloneSnapshot_snap as table=Group_testCloneSnapshot_clone 2023-07-13 15:16:24,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:24,182 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:24,188 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(177): starting restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 2023-07-13 15:16:24,188 DEBUG [PEWorker-4] snapshot.RestoreSnapshotHelper(785): get table regions: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone 2023-07-13 15:16:24,189 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(239): region to add: cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:24,189 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(585): clone region=cc51566ecf280d9e1130becdb331b0b2 as 76fec091d7d06a4c933b44e27d0afad6 in snapshot Group_testCloneSnapshot_snap 2023-07-13 15:16:24,191 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(7675): creating {ENCODED => 76fec091d7d06a4c933b44e27d0afad6, NAME => 'Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testCloneSnapshot_clone', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, 
{NAME => 'test', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:24,202 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:24,203 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1604): Closing 76fec091d7d06a4c933b44e27d0afad6, disabling compactions & flushes 2023-07-13 15:16:24,203 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,203 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,203 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. after waiting 0 ms 2023-07-13 15:16:24,203 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,203 INFO [RestoreSnapshot-pool-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,203 DEBUG [RestoreSnapshot-pool-0] regionserver.HRegion(1558): Region close journal for 76fec091d7d06a4c933b44e27d0afad6: 2023-07-13 15:16:24,203 INFO [PEWorker-4] snapshot.RestoreSnapshotHelper(266): finishing restore table regions using snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 2023-07-13 15:16:24,203 INFO [PEWorker-4] procedure.CloneSnapshotProcedure$1(421): Clone snapshot=Group_testCloneSnapshot_snap on table=Group_testCloneSnapshot_clone completed! 2023-07-13 15:16:24,207 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689261384207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384207"}]},"ts":"1689261384207"} 2023-07-13 15:16:24,209 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 15:16:24,210 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384210"}]},"ts":"1689261384210"} 2023-07-13 15:16:24,211 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLING in hbase:meta 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 15:16:24,216 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:24,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, ASSIGN}] 2023-07-13 15:16:24,218 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, ASSIGN 2023-07-13 15:16:24,219 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:24,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:24,369 INFO [jenkins-hbase4:38141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 15:16:24,371 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=76fec091d7d06a4c933b44e27d0afad6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:24,371 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689261384371"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384371"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384371"}]},"ts":"1689261384371"} 2023-07-13 15:16:24,373 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 76fec091d7d06a4c933b44e27d0afad6, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:24,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:24,528 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 76fec091d7d06a4c933b44e27d0afad6, NAME => 'Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:24,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testCloneSnapshot_clone 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:24,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,531 INFO [StoreOpener-76fec091d7d06a4c933b44e27d0afad6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family test of region 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,533 DEBUG [StoreOpener-76fec091d7d06a4c933b44e27d0afad6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/test 2023-07-13 15:16:24,533 DEBUG [StoreOpener-76fec091d7d06a4c933b44e27d0afad6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/test 2023-07-13 15:16:24,533 INFO [StoreOpener-76fec091d7d06a4c933b44e27d0afad6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 
MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 76fec091d7d06a4c933b44e27d0afad6 columnFamilyName test 2023-07-13 15:16:24,534 INFO [StoreOpener-76fec091d7d06a4c933b44e27d0afad6-1] regionserver.HStore(310): Store=76fec091d7d06a4c933b44e27d0afad6/test, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:24,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:24,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:24,541 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 76fec091d7d06a4c933b44e27d0afad6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11036402240, jitterRate=0.027845054864883423}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:24,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 76fec091d7d06a4c933b44e27d0afad6: 2023-07-13 15:16:24,542 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6., pid=96, masterSystemTime=1689261384524 2023-07-13 15:16:24,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:24,543 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 
2023-07-13 15:16:24,544 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=76fec091d7d06a4c933b44e27d0afad6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:24,544 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689261384544"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261384544"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261384544"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261384544"}]},"ts":"1689261384544"} 2023-07-13 15:16:24,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-13 15:16:24,548 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 76fec091d7d06a4c933b44e27d0afad6, server=jenkins-hbase4.apache.org,34275,1689261367926 in 173 msec 2023-07-13 15:16:24,550 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-13 15:16:24,551 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, ASSIGN in 332 msec 2023-07-13 15:16:24,555 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384555"}]},"ts":"1689261384555"} 2023-07-13 15:16:24,557 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=ENABLED in hbase:meta 2023-07-13 15:16:24,562 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 ) in 410 msec 2023-07-13 15:16:24,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-13 15:16:24,769 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: MODIFY, Table Name: default:Group_testCloneSnapshot_clone, procId: 94 completed 2023-07-13 15:16:24,779 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot 2023-07-13 15:16:24,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot 2023-07-13 15:16:24,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:24,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:24,786 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384786"}]},"ts":"1689261384786"} 2023-07-13 15:16:24,788 INFO 
[PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLING in hbase:meta 2023-07-13 15:16:24,790 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot to state=DISABLING 2023-07-13 15:16:24,791 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, UNASSIGN}] 2023-07-13 15:16:24,797 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, UNASSIGN 2023-07-13 15:16:24,799 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=cc51566ecf280d9e1130becdb331b0b2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:24,799 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261384799"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261384799"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261384799"}]},"ts":"1689261384799"} 2023-07-13 15:16:24,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; CloseRegionProcedure cc51566ecf280d9e1130becdb331b0b2, server=jenkins-hbase4.apache.org,36737,1689261368119}] 2023-07-13 15:16:24,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:24,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:24,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc51566ecf280d9e1130becdb331b0b2, disabling compactions & flushes 2023-07-13 15:16:24,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:24,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:24,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. after waiting 0 ms 2023-07-13 15:16:24,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 
2023-07-13 15:16:24,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/recovered.edits/5.seqid, newMaxSeqId=5, maxSeqId=1 2023-07-13 15:16:24,960 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2. 2023-07-13 15:16:24,960 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc51566ecf280d9e1130becdb331b0b2: 2023-07-13 15:16:24,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:24,966 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=cc51566ecf280d9e1130becdb331b0b2, regionState=CLOSED 2023-07-13 15:16:24,966 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"regioninfo","vlen":57,"tag":[],"timestamp":"1689261384966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261384966"}]},"ts":"1689261384966"} 2023-07-13 15:16:24,972 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-13 15:16:24,972 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; CloseRegionProcedure cc51566ecf280d9e1130becdb331b0b2, server=jenkins-hbase4.apache.org,36737,1689261368119 in 167 msec 2023-07-13 15:16:24,974 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-13 15:16:24,974 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot, region=cc51566ecf280d9e1130becdb331b0b2, UNASSIGN in 181 msec 2023-07-13 15:16:24,975 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261384975"}]},"ts":"1689261384975"} 2023-07-13 15:16:24,976 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot, state=DISABLED in hbase:meta 2023-07-13 15:16:24,982 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot to state=DISABLED 2023-07-13 15:16:24,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot in 203 msec 2023-07-13 15:16:25,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-13 15:16:25,089 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot, procId: 97 completed 2023-07-13 15:16:25,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot 2023-07-13 15:16:25,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 
15:16:25,093 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=100, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:25,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot' from rsgroup 'default' 2023-07-13 15:16:25,095 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=100, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:25,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:25,099 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:25,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 15:16:25,102 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/recovered.edits, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/test] 2023-07-13 15:16:25,108 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/recovered.edits/5.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2/recovered.edits/5.seqid 2023-07-13 15:16:25,110 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot/cc51566ecf280d9e1130becdb331b0b2 2023-07-13 15:16:25,110 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot regions 2023-07-13 15:16:25,112 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=100, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:25,115 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot from hbase:meta 2023-07-13 15:16:25,116 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot' descriptor. 
2023-07-13 15:16:25,117 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=100, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:25,117 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot' from region states. 2023-07-13 15:16:25,117 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261385117"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:25,119 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:25,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cc51566ecf280d9e1130becdb331b0b2, NAME => 'Group_testCloneSnapshot,,1689261383142.cc51566ecf280d9e1130becdb331b0b2.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:25,119 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot' as deleted. 2023-07-13 15:16:25,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261385119"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:25,121 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot state from META 2023-07-13 15:16:25,123 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=100, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:25,124 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot in 33 msec 2023-07-13 15:16:25,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 15:16:25,202 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot, procId: 100 completed 2023-07-13 15:16:25,203 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_testCloneSnapshot_clone 2023-07-13 15:16:25,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testCloneSnapshot_clone 2023-07-13 15:16:25,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-13 15:16:25,207 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261385207"}]},"ts":"1689261385207"} 2023-07-13 15:16:25,209 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLING in hbase:meta 2023-07-13 15:16:25,210 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testCloneSnapshot_clone to state=DISABLING 2023-07-13 15:16:25,211 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, UNASSIGN}] 2023-07-13 15:16:25,213 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=102, ppid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, UNASSIGN 2023-07-13 15:16:25,213 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=76fec091d7d06a4c933b44e27d0afad6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:25,213 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689261385213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261385213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261385213"}]},"ts":"1689261385213"} 2023-07-13 15:16:25,215 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=102, state=RUNNABLE; CloseRegionProcedure 76fec091d7d06a4c933b44e27d0afad6, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:25,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-13 15:16:25,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:25,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 76fec091d7d06a4c933b44e27d0afad6, disabling compactions & flushes 2023-07-13 15:16:25,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:25,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:25,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. after waiting 0 ms 2023-07-13 15:16:25,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 2023-07-13 15:16:25,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:25,374 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6. 
2023-07-13 15:16:25,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 76fec091d7d06a4c933b44e27d0afad6: 2023-07-13 15:16:25,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:25,376 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=102 updating hbase:meta row=76fec091d7d06a4c933b44e27d0afad6, regionState=CLOSED 2023-07-13 15:16:25,376 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"regioninfo","vlen":63,"tag":[],"timestamp":"1689261385376"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261385376"}]},"ts":"1689261385376"} 2023-07-13 15:16:25,379 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=102 2023-07-13 15:16:25,379 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=102, state=SUCCESS; CloseRegionProcedure 76fec091d7d06a4c933b44e27d0afad6, server=jenkins-hbase4.apache.org,34275,1689261367926 in 162 msec 2023-07-13 15:16:25,382 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-13 15:16:25,382 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; TransitRegionStateProcedure table=Group_testCloneSnapshot_clone, region=76fec091d7d06a4c933b44e27d0afad6, UNASSIGN in 168 msec 2023-07-13 15:16:25,382 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261385382"}]},"ts":"1689261385382"} 2023-07-13 15:16:25,384 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testCloneSnapshot_clone, state=DISABLED in hbase:meta 2023-07-13 15:16:25,386 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testCloneSnapshot_clone to state=DISABLED 2023-07-13 15:16:25,388 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone in 184 msec 2023-07-13 15:16:25,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=101 2023-07-13 15:16:25,509 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testCloneSnapshot_clone, procId: 101 completed 2023-07-13 15:16:25,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testCloneSnapshot_clone 2023-07-13 15:16:25,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,515 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=104, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testCloneSnapshot_clone' from rsgroup 'default' 2023-07-13 15:16:25,516 
DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=104, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:25,520 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:25,522 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/recovered.edits, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/test] 2023-07-13 15:16:25,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-13 15:16:25,527 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/recovered.edits/4.seqid to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6/recovered.edits/4.seqid 2023-07-13 15:16:25,529 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/default/Group_testCloneSnapshot_clone/76fec091d7d06a4c933b44e27d0afad6 2023-07-13 15:16:25,529 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testCloneSnapshot_clone regions 2023-07-13 15:16:25,532 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=104, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,534 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testCloneSnapshot_clone from hbase:meta 2023-07-13 15:16:25,535 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testCloneSnapshot_clone' descriptor. 2023-07-13 15:16:25,537 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=104, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,537 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testCloneSnapshot_clone' from region states. 
2023-07-13 15:16:25,537 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261385537"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:25,539 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:25,540 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 76fec091d7d06a4c933b44e27d0afad6, NAME => 'Group_testCloneSnapshot_clone,,1689261383142.76fec091d7d06a4c933b44e27d0afad6.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:25,540 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testCloneSnapshot_clone' as deleted. 2023-07-13 15:16:25,540 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testCloneSnapshot_clone","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261385540"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:25,545 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testCloneSnapshot_clone state from META 2023-07-13 15:16:25,546 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=104, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:25,548 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone in 36 msec 2023-07-13 15:16:25,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=104 2023-07-13 15:16:25,624 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testCloneSnapshot_clone, procId: 104 completed 2023-07-13 15:16:25,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:25,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
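
The entries above record the test cleaning up its clone-snapshot fixtures: procId 100 deletes default:Group_testCloneSnapshot, procId 101 disables default:Group_testCloneSnapshot_clone (unassigning region 76fec091d7d06a4c933b44e27d0afad6), and procId 104 deletes the clone. A minimal sketch of the client-side calls that drive those procedures, using the public HBase 2.x Admin API (illustrative only, not the actual test code; the connection setup and the existence/enabled checks are assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CloneSnapshotCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Table names as they appear in the log above.
          TableName[] tables = {
              TableName.valueOf("Group_testCloneSnapshot"),
              TableName.valueOf("Group_testCloneSnapshot_clone")
          };
          for (TableName tn : tables) {
            if (!admin.tableExists(tn)) {
              continue;
            }
            if (admin.isTableEnabled(tn)) {
              // Drives DisableTableProcedure: regions are unassigned and closed first.
              admin.disableTable(tn);
            }
            // Drives DeleteTableProcedure: region directories are archived,
            // then the table is removed from hbase:meta.
            admin.deleteTable(tn);
          }
        }
      }
    }

As the HFileArchiver lines above show, the deleted regions' files (for example recovered.edits/5.seqid) are moved under the cluster's archive directory rather than being removed outright.
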
2023-07-13 15:16:25,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:25,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:25,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:25,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:25,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:25,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:25,638 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:25,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:25,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:25,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:25,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:25,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:25,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 567 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262585650, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:25,651 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:25,652 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:25,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,653 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:25,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:25,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:25,674 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCloneSnapshot Thread=524 (was 522) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,43693,1689261373307' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632709426_17 at /127.0.0.1:35368 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1627220607_17 at /127.0.0.1:38672 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,41955,1689261371593' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,34275,1689261367926' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1627220607_17 at /127.0.0.1:43022 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: member: 'jenkins-hbase4.apache.org,36737,1689261368119' subprocedure-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x50aa0278-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: (jenkins-hbase4.apache.org,38141,1689261365700)-proc-coordinator-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:458) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.take(SynchronousQueue.java:924) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=799 (was 810), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 459), ProcessCount=172 (was 172), AvailableMemoryMB=4136 (was 4183) 2023-07-13 15:16:25,675 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-13 15:16:25,694 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=524, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=172, AvailableMemoryMB=4140 2023-07-13 15:16:25,694 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-13 15:16:25,695 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:25,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
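
The per-test setup and teardown in TestRSGroupsBase repeatedly walks the rsgroup admin service, as the entries above and below show: list the groups, move empty table and server sets back to 'default', remove and re-add the 'master' group, and then attempt to move the master's own address (jenkins-hbase4.apache.org:38141) into it. That last call is rejected with the ConstraintException traced above, since the master does not register as an online region server. A minimal sketch of the failing call, assuming the RSGroupAdminClient that appears in the stack trace (the connection setup is illustrative and the 'master' group is assumed to already exist):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterIntoRSGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Host:port of the active master, taken from the log above. It is not an
          // online region server, so RSGroupAdminServer.moveServers rejects it.
          Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 38141);
          try {
            rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
          } catch (ConstraintException e) {
            // Expected: "Server ... is either offline or it does not exist."
            System.out.println("moveServers rejected: " + e.getMessage());
          }
        }
      }
    }

The test logs this failure as a warning ("Got this on setup, FYI") and carries on, waiting only until every region server is back in the 'default' group before the next test method runs.
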
2023-07-13 15:16:25,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:25,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:25,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:25,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:25,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:25,709 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:25,712 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:25,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:25,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:25,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:25,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:25,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:25,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 595 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262585722, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:25,722 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:25,724 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:25,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,725 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:25,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:25,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:25,726 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(141): testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:25,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:25,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:25,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup appInfo 2023-07-13 15:16:25,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,729 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating 
znode: /hbase/rsgroup/master 2023-07-13 15:16:25,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:25,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:25,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:25,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34275] to rsgroup appInfo 2023-07-13 15:16:25,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:25,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 15:16:25,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34275,1689261367926] are moved back to default 2023-07-13 15:16:25,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: default => appInfo 2023-07-13 15:16:25,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:25,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:25,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:25,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=appInfo 2023-07-13 15:16:25,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:25,758 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-13 15:16:25,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.ServerManager(636): Server jenkins-hbase4.apache.org,34275,1689261367926 added to draining server list. 2023-07-13 15:16:25,760 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/draining/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:25,761 WARN [zk-event-processor-pool-0] master.ServerManager(632): Server jenkins-hbase4.apache.org,34275,1689261367926 is already in the draining server list.Ignoring request to add it again. 2023-07-13 15:16:25,761 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(92): Draining RS node created, adding to list [jenkins-hbase4.apache.org,34275,1689261367926] 2023-07-13 15:16:25,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_ns', hbase.rsgroup.name => 'appInfo'} 2023-07-13 15:16:25,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=105, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:25,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-13 15:16:25,770 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:25,773 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns in 10 msec 2023-07-13 15:16:25,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=105 2023-07-13 15:16:25,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:25,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:25,872 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=106, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:25,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: 
"testCreateWhenRsgroupNoOnlineServers" procId is: 106 2023-07-13 15:16:25,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 15:16:25,886 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers exec-time=16 msec 2023-07-13 15:16:25,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-13 15:16:25,977 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 106 failed with No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to 2023-07-13 15:16:25,978 DEBUG [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(162): create table error org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to at java.lang.Thread.getStackTrace(Thread.java:1564) at org.apache.hadoop.hbase.util.FutureUtils.setStackTrace(FutureUtils.java:130) at org.apache.hadoop.hbase.util.FutureUtils.rethrow(FutureUtils.java:149) at org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:186) at org.apache.hadoop.hbase.client.Admin.createTable(Admin.java:302) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.testCreateWhenRsgroupNoOnlineServers(TestRSGroupsBasics.java:159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) at --------Future.get--------(Unknown Source) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.validateRSGroup(RSGroupAdminEndpoint.java:540) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.moveTableToValidRSGroup(RSGroupAdminEndpoint.java:529) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateTableAction(RSGroupAdminEndpoint.java:501) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:371) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$16.call(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTableAction(MasterCoprocessorHost.java:368) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.preCreate(CreateTableProcedure.java:267) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:93) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-13 15:16:25,986 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/draining/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:25,986 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/draining 2023-07-13 15:16:25,986 INFO [zk-event-processor-pool-0] master.DrainingServerTracker(109): Draining RS node deleted, removing from list [jenkins-hbase4.apache.org,34275,1689261367926] 2023-07-13 15:16:25,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:25,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=107, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:25,992 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:25,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "Group_ns" qualifier: "testCreateWhenRsgroupNoOnlineServers" procId is: 107 2023-07-13 15:16:25,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-13 15:16:25,994 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:25,994 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:25,995 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:25,995 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:25,998 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:26,000 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,000 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 empty. 
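The rolled-back pid=106 above is the expected negative case in testCreateWhenRsgroupNoOnlineServers: the Group_ns namespace is bound to the appInfo group via hbase.rsgroup.name, but the only server in that group (jenkins-hbase4.apache.org:34275) had just been placed on the draining list, so the pre-create coprocessor hook rejects the table. pid=107 is the retry that proceeds once the server is taken off the draining list. A minimal client-side sketch of the setup that produces the first failure follows; the RSGroupAdminClient.moveServers and Admin.createTable calls mirror the stack traces in the log, while the remaining wiring (addRSGroup, createNamespace, the main-method scaffolding) is assumed for illustration, and the draining step itself, done through the master's draining-servers list in the real test, is omitted.

// Sketch only, not the test's code: reproduces the "No online servers in the rsgroup appInfo"
// rejection seen above, on an HBase 2.x cluster with the hbase-rsgroup coprocessor loaded.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupNoOnlineServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create the group and lend it a single region server (host:port as in the log).
      rsGroupAdmin.addRSGroup("appInfo");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:34275")), "appInfo");
      // Bind the namespace to the group through its configuration.
      admin.createNamespace(NamespaceDescriptor.create("Group_ns")
          .addConfiguration("hbase.rsgroup.name", "appInfo").build());
      // With the group's only server draining (not shown here), this create is expected to
      // fail with HBaseIOException: "No online servers in the rsgroup appInfo which table
      // Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to".
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
    }
  }
}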
2023-07-13 15:16:26,001 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,001 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-13 15:16:26,018 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:26,019 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(7675): creating {ENCODED => 975b440f0d132197bee267f321569e01, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_ns:testCreateWhenRsgroupNoOnlineServers', {NAME => 'f', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:26,028 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:26,028 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1604): Closing 975b440f0d132197bee267f321569e01, disabling compactions & flushes 2023-07-13 15:16:26,028 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,029 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,029 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. after waiting 0 ms 2023-07-13 15:16:26,029 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,029 INFO [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 
2023-07-13 15:16:26,029 DEBUG [RegionOpenAndInit-Group_ns:testCreateWhenRsgroupNoOnlineServers-pool-0] regionserver.HRegion(1558): Region close journal for 975b440f0d132197bee267f321569e01: 2023-07-13 15:16:26,031 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:26,032 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261386032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261386032"}]},"ts":"1689261386032"} 2023-07-13 15:16:26,033 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:26,034 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:26,034 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261386034"}]},"ts":"1689261386034"} 2023-07-13 15:16:26,035 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLING in hbase:meta 2023-07-13 15:16:26,040 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, ASSIGN}] 2023-07-13 15:16:26,042 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, ASSIGN 2023-07-13 15:16:26,043 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34275,1689261367926; forceNewPlan=false, retain=false 2023-07-13 15:16:26,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-13 15:16:26,194 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=975b440f0d132197bee267f321569e01, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:26,195 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261386194"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261386194"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261386194"}]},"ts":"1689261386194"} 2023-07-13 15:16:26,196 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; OpenRegionProcedure 975b440f0d132197bee267f321569e01, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:26,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-13 15:16:26,352 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 975b440f0d132197bee267f321569e01, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:26,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testCreateWhenRsgroupNoOnlineServers 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:26,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,353 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,354 INFO [StoreOpener-975b440f0d132197bee267f321569e01-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,355 DEBUG [StoreOpener-975b440f0d132197bee267f321569e01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/f 2023-07-13 15:16:26,355 DEBUG [StoreOpener-975b440f0d132197bee267f321569e01-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/f 2023-07-13 15:16:26,356 INFO [StoreOpener-975b440f0d132197bee267f321569e01-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 975b440f0d132197bee267f321569e01 columnFamilyName 
f 2023-07-13 15:16:26,356 INFO [StoreOpener-975b440f0d132197bee267f321569e01-1] regionserver.HStore(310): Store=975b440f0d132197bee267f321569e01/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:26,357 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:26,365 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 975b440f0d132197bee267f321569e01; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10952148640, jitterRate=0.019998326897621155}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:26,365 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 975b440f0d132197bee267f321569e01: 2023-07-13 15:16:26,370 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01., pid=109, masterSystemTime=1689261386348 2023-07-13 15:16:26,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 
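At this point the retried create (pid=107) has opened the table's single region on jenkins-hbase4.apache.org,34275, the one server assigned to the appInfo group. A small illustrative check of that placement from the client side is sketched below; RegionLocator and RSGroupInfo.containsServer are existing HBase 2.x APIs, but this particular verification is an assumption added for illustration, not the test's own assertion.

// Sketch: confirm every region of the table is hosted by a server in the appInfo group.
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RegionPlacementCheckSketch {
  static void verifyPlacement(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers");
    RSGroupInfo group = new RSGroupAdminClient(conn).getRSGroupInfo("appInfo");
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        Address hostPort = Address.fromParts(loc.getHostname(), loc.getPort());
        if (!group.containsServer(hostPort)) {
          throw new IllegalStateException("Region " + loc.getRegion().getEncodedName()
              + " is not hosted by the appInfo group");
        }
      }
    }
  }
}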
2023-07-13 15:16:26,373 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=975b440f0d132197bee267f321569e01, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:26,373 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261386373"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261386373"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261386373"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261386373"}]},"ts":"1689261386373"} 2023-07-13 15:16:26,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-13 15:16:26,377 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; OpenRegionProcedure 975b440f0d132197bee267f321569e01, server=jenkins-hbase4.apache.org,34275,1689261367926 in 179 msec 2023-07-13 15:16:26,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-13 15:16:26,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, ASSIGN in 337 msec 2023-07-13 15:16:26,380 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:26,380 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261386380"}]},"ts":"1689261386380"} 2023-07-13 15:16:26,381 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=ENABLED in hbase:meta 2023-07-13 15:16:26,384 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=107, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:26,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 395 msec 2023-07-13 15:16:26,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-13 15:16:26,597 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 107 completed 2023-07-13 15:16:26,597 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:26,602 INFO [Listener at localhost/35161] client.HBaseAdmin$15(890): Started disable of Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_ns:testCreateWhenRsgroupNoOnlineServers 
2023-07-13 15:16:26,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:26,606 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261386606"}]},"ts":"1689261386606"} 2023-07-13 15:16:26,607 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLING in hbase:meta 2023-07-13 15:16:26,609 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLING 2023-07-13 15:16:26,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, UNASSIGN}] 2023-07-13 15:16:26,615 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, UNASSIGN 2023-07-13 15:16:26,616 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=975b440f0d132197bee267f321569e01, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:26,616 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261386616"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261386616"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261386616"}]},"ts":"1689261386616"} 2023-07-13 15:16:26,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 975b440f0d132197bee267f321569e01, server=jenkins-hbase4.apache.org,34275,1689261367926}] 2023-07-13 15:16:26,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:26,770 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 975b440f0d132197bee267f321569e01, disabling compactions & flushes 2023-07-13 15:16:26,771 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 
2023-07-13 15:16:26,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. after waiting 0 ms 2023-07-13 15:16:26,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:26,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01. 2023-07-13 15:16:26,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 975b440f0d132197bee267f321569e01: 2023-07-13 15:16:26,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,779 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=975b440f0d132197bee267f321569e01, regionState=CLOSED 2023-07-13 15:16:26,780 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689261386779"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261386779"}]},"ts":"1689261386779"} 2023-07-13 15:16:26,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-13 15:16:26,783 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 975b440f0d132197bee267f321569e01, server=jenkins-hbase4.apache.org,34275,1689261367926 in 163 msec 2023-07-13 15:16:26,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-13 15:16:26,787 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers, region=975b440f0d132197bee267f321569e01, UNASSIGN in 170 msec 2023-07-13 15:16:26,787 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261386787"}]},"ts":"1689261386787"} 2023-07-13 15:16:26,790 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_ns:testCreateWhenRsgroupNoOnlineServers, state=DISABLED in hbase:meta 2023-07-13 15:16:26,792 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_ns:testCreateWhenRsgroupNoOnlineServers to state=DISABLED 2023-07-13 15:16:26,796 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 192 msec 2023-07-13 15:16:26,846 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:26,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-13 15:16:26,908 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 110 completed 2023-07-13 15:16:26,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,912 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from rsgroup 'appInfo' 2023-07-13 15:16:26,913 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:26,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:26,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:26,917 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:26,920 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/f, FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/recovered.edits] 2023-07-13 15:16:26,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 15:16:26,925 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/recovered.edits/4.seqid to 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01/recovered.edits/4.seqid 2023-07-13 15:16:26,925 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/Group_ns/testCreateWhenRsgroupNoOnlineServers/975b440f0d132197bee267f321569e01 2023-07-13 15:16:26,925 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_ns:testCreateWhenRsgroupNoOnlineServers regions 2023-07-13 15:16:26,928 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,929 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_ns:testCreateWhenRsgroupNoOnlineServers from hbase:meta 2023-07-13 15:16:26,931 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' descriptor. 2023-07-13 15:16:26,932 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,932 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_ns:testCreateWhenRsgroupNoOnlineServers' from region states. 2023-07-13 15:16:26,932 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689261386932"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:26,933 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 15:16:26,934 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 975b440f0d132197bee267f321569e01, NAME => 'Group_ns:testCreateWhenRsgroupNoOnlineServers,,1689261385989.975b440f0d132197bee267f321569e01.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 15:16:26,934 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_ns:testCreateWhenRsgroupNoOnlineServers' as deleted. 
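pids 110 and 113 above are the standard cleanup path: DisableTableProcedure unassigns the region, then DeleteTableProcedure archives the region directory under /archive and removes the table from hbase:meta; the Group_ns namespace itself is dropped next (pid=114 below). A hedged sketch of the equivalent client calls follows; disableTable, deleteTable and deleteNamespace are standard HBase Admin methods, though whether the test invokes exactly these is inferred from the log rather than shown in it.

// Sketch: drop the test table and its namespace, mirroring pids 110/113/114 in the log.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableAndNamespaceSketch {
  static void cleanup(Admin admin) throws Exception {
    TableName table = TableName.valueOf("Group_ns:testCreateWhenRsgroupNoOnlineServers");
    // Disable first so the region is unassigned (DisableTableProcedure).
    if (admin.isTableEnabled(table)) {
      admin.disableTable(table);
    }
    // Delete the table: region dirs are archived and the table is removed from hbase:meta.
    admin.deleteTable(table);
    // Finally drop the now-empty namespace (DeleteNamespaceProcedure).
    admin.deleteNamespace("Group_ns");
  }
}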
2023-07-13 15:16:26,934 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_ns:testCreateWhenRsgroupNoOnlineServers","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689261386934"}]},"ts":"9223372036854775807"} 2023-07-13 15:16:26,935 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_ns:testCreateWhenRsgroupNoOnlineServers state from META 2023-07-13 15:16:26,937 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:26,938 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers in 28 msec 2023-07-13 15:16:27,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-13 15:16:27,021 INFO [Listener at localhost/35161] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: Group_ns:testCreateWhenRsgroupNoOnlineServers, procId: 113 completed 2023-07-13 15:16:27,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_ns 2023-07-13 15:16:27,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,029 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:27,033 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,034 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,036 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_ns 2023-07-13 15:16:27,036 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 15:16:27,038 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,041 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=114, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:27,042 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns in 15 msec 2023-07-13 15:16:27,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-13 15:16:27,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:27,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:27,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:27,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:27,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:27,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:27,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:27,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:27,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
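The remaining entries are the per-test teardown from TestRSGroupsBase: servers lent to appInfo are moved back to the default group, the extra groups are removed, and the master group is re-added; the final attempt to move the master's own address (jenkins-hbase4.apache.org:38141) fails with the ConstraintException traced below because the master is not a region server known to the group manager, and the test records it only as an FYI. A hedged sketch of the restore calls follows; the group and server names come from the log, while the exact sequence the teardown helper uses is an assumption.

// Sketch: return the borrowed region server to the default group and drop the test group.
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RsGroupTeardownSketch {
  static void restoreDefaultGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Move the region server that was lent to appInfo back to the default group.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:34275")),
        RSGroupInfo.DEFAULT_GROUP);
    // Drop the group created by the test; it must hold no servers or tables at this point.
    rsGroupAdmin.removeRSGroup("appInfo");
  }
}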
2023-07-13 15:16:27,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:27,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34275] to rsgroup default 2023-07-13 15:16:27,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/appInfo 2023-07-13 15:16:27,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group appInfo, current retry=0 2023-07-13 15:16:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34275,1689261367926] are moved back to appInfo 2023-07-13 15:16:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(438): Move servers done: appInfo => default 2023-07-13 15:16:27,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:27,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup appInfo 2023-07-13 15:16:27,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:27,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:27,167 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:27,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:27,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:27,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:27,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 
15:16:27,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:27,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 697 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262587178, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:27,178 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:27,180 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:27,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,181 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:27,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:27,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:27,205 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testCreateWhenRsgroupNoOnlineServers Thread=525 (was 524) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1632709426_17 at /127.0.0.1:35368 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=799 (was 799), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=172 (was 172), AvailableMemoryMB=4122 (was 4140) 2023-07-13 15:16:27,206 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-13 15:16:27,228 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=525, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=172, AvailableMemoryMB=4119 2023-07-13 15:16:27,228 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-13 15:16:27,228 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testBasicStartUp 2023-07-13 15:16:27,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:27,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
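The cleanup sequence that keeps repeating in this log (list rsgroups, move an empty table set and an empty server set back to "default", remove and re-add the "master" group, then attempt to move the master's own address into it) corresponds to the rsgroup admin calls named in the stack traces (RSGroupAdminClient.moveServers via VerifyingRSGroupAdminClient in TestRSGroupsBase.tearDownAfterMethod). The following is a minimal illustrative sketch of that sequence against the public RSGroupAdminClient API, not the actual TestRSGroupsBase code; the connection wiring and the hard-coded host:port are assumptions copied from this run for illustration only.

import java.io.IOException;
import java.util.Collections;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient admin = new RSGroupAdminClient(conn);

      // ListRSGroupInfos / MoveTables / MoveServers, as logged above.
      admin.listRSGroups();
      admin.moveTables(Collections.emptySet(), "default");   // server logs "passed an empty set. Ignoring."
      admin.moveServers(Collections.emptySet(), "default");  // an empty server set is a no-op on the server

      // RemoveRSGroup then AddRSGroup for the test's "master" group.
      admin.removeRSGroup("master");
      admin.addRSGroup("master");

      // The master's RPC address (port 38141 in this run) is not an online region
      // server, so the master rejects the move; the test logs the failure as
      // "Got this on setup, FYI" and carries on. Host:port is illustrative.
      Set<Address> masterAddr =
          Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:38141"));
      try {
        admin.moveServers(masterAddr, "master");
      } catch (IOException e) {
        // Surfaces as org.apache.hadoop.hbase.constraint.ConstraintException in the log:
        // "Server ... is either offline or it does not exist."
        System.out.println("expected rejection: " + e.getMessage());
      }
    }
  }
}

This matches the server-side entries above: each client call appears as an RSGroupAdminEndpoint$RSGroupAdminServiceImpl log line followed by the corresponding MasterRpcServices request (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup), and only the final moveServers call produces the ConstraintException and the "Got this on setup, FYI" warning.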
2023-07-13 15:16:27,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:27,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:27,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:27,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:27,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:27,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:27,251 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:27,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:27,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:27,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:27,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:27,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:27,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:27,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 725 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262587266, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:27,267 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:27,268 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:27,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,269 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:27,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:27,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:27,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:27,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:27,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] 
to rsgroup default 2023-07-13 15:16:27,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:27,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:27,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:27,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:27,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:27,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:27,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:27,287 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:27,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:27,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:27,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:27,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:27,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:27,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server 
jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:27,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 755 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262587300, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:27,301 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 15:16:27,303 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:27,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,304 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:27,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:27,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:27,324 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testBasicStartUp Thread=526 (was 525) Potentially hanging thread: hconnection-0x609dbbf-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=799 (was 799), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=172 (was 172), AvailableMemoryMB=4113 (was 4119) 2023-07-13 15:16:27,324 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-13 15:16:27,344 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=526, OpenFileDescriptor=799, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=172, AvailableMemoryMB=4113 2023-07-13 15:16:27,344 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=526 is superior to 500 2023-07-13 15:16:27,344 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testRSGroupsWithHBaseQuota 2023-07-13 15:16:27,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:27,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:27,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:27,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:27,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:27,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:27,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:27,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:27,363 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:27,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:27,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:27,366 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:27,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:27,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:27,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38141] to rsgroup master 2023-07-13 15:16:27,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:27,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] ipc.CallRunner(144): callId: 783 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34824 deadline: 1689262587376, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 2023-07-13 15:16:27,377 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:27,379 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:27,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:27,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:27,380 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34275, jenkins-hbase4.apache.org:36737, jenkins-hbase4.apache.org:41955, jenkins-hbase4.apache.org:43693], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:27,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:27,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:27,381 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-13 15:16:27,381 INFO [Listener at localhost/35161] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:27,381 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c41c5f7 to 127.0.0.1:56695 2023-07-13 15:16:27,381 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,382 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 15:16:27,382 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(257): Found active master hash=869062456, stopped=false 2023-07-13 15:16:27,382 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:27,385 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:27,385 INFO [Listener at localhost/35161] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:27,387 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,387 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,387 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,387 DEBUG [Listener at 
localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,387 INFO [Listener at localhost/35161] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:27,387 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,387 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,387 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,387 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,387 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:27,387 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,388 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f16ae4c to 127.0.0.1:56695 2023-07-13 15:16:27,388 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,388 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34275,1689261367926' ***** 2023-07-13 15:16:27,388 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:27,388 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36737,1689261368119' ***** 2023-07-13 15:16:27,388 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:27,388 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:27,388 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:27,388 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41955,1689261371593' ***** 2023-07-13 15:16:27,389 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:27,388 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:27,392 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:27,392 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase4.apache.org,43693,1689261373307' ***** 2023-07-13 15:16:27,397 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:27,399 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:27,400 INFO [RS:2;jenkins-hbase4:36737] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4ad0acc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,400 INFO [RS:1;jenkins-hbase4:34275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7c8e2ea{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,400 INFO [RS:3;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c20dc81{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,401 INFO [RS:2;jenkins-hbase4:36737] server.AbstractConnector(383): Stopped ServerConnector@5a58295b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,401 INFO [RS:2;jenkins-hbase4:36737] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:27,401 INFO [RS:1;jenkins-hbase4:34275] server.AbstractConnector(383): Stopped ServerConnector@12e45025{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,401 INFO [RS:3;jenkins-hbase4:41955] server.AbstractConnector(383): Stopped ServerConnector@8ad847b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,401 INFO [RS:1;jenkins-hbase4:34275] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:27,401 INFO [RS:3;jenkins-hbase4:41955] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:27,403 INFO [RS:1;jenkins-hbase4:34275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@38834d1b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:27,403 INFO [RS:3;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b9dbcbd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:27,403 INFO [RS:2;jenkins-hbase4:36737] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ae3c253{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:27,404 INFO [RS:1;jenkins-hbase4:34275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@425bb8be{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:27,405 INFO [RS:3;jenkins-hbase4:41955] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4b22a6fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:27,405 INFO [RS:2;jenkins-hbase4:36737] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@50d1e028{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:27,405 INFO [RS:4;jenkins-hbase4:43693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1899cb6d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:27,406 INFO [RS:1;jenkins-hbase4:34275] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:27,406 INFO [RS:1;jenkins-hbase4:34275] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:27,406 INFO [RS:3;jenkins-hbase4:41955] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:27,406 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:27,406 INFO [RS:3;jenkins-hbase4:41955] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:27,406 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:27,406 INFO [RS:1;jenkins-hbase4:34275] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:27,406 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:27,406 DEBUG [RS:1;jenkins-hbase4:34275] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4d641f4d to 127.0.0.1:56695 2023-07-13 15:16:27,406 DEBUG [RS:1;jenkins-hbase4:34275] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,406 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34275,1689261367926; all regions closed. 2023-07-13 15:16:27,406 INFO [RS:2;jenkins-hbase4:36737] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:27,407 INFO [RS:2;jenkins-hbase4:36737] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:27,407 INFO [RS:2;jenkins-hbase4:36737] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:27,407 INFO [RS:4;jenkins-hbase4:43693] server.AbstractConnector(383): Stopped ServerConnector@9b30d54{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,407 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(3305): Received CLOSE for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:27,406 INFO [RS:3;jenkins-hbase4:41955] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
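The ConstraintException traced above ("Server jenkins-hbase4.apache.org:38141 is either offline or it does not exist") is raised on the master while TestRSGroupsBase.tearDownAfterMethod moves servers back into the default group through RSGroupAdminClient.moveServers (wrapped here by the test's VerifyingRSGroupAdminClient). A minimal, hypothetical sketch of that client call follows; the host name and port are placeholders, not values from this run, and this is not the actual test source.

// Hypothetical sketch of the call path recorded in the trace above.
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // RSGroupAdminServer.moveServers on the master rejects the request with a
      // ConstraintException when the named server is offline or unknown; that is
      // the exception unwrapped on the client side in the trace above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("some-host.example.org", 16020)),
          RSGroupInfo.DEFAULT_GROUP);
    }
  }
}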
2023-07-13 15:16:27,407 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(3305): Received CLOSE for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:27,407 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:27,407 DEBUG [RS:2;jenkins-hbase4:36737] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x021739ce to 127.0.0.1:56695 2023-07-13 15:16:27,407 INFO [RS:4;jenkins-hbase4:43693] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:27,407 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:27,411 INFO [RS:4;jenkins-hbase4:43693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67ca648{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:27,416 INFO [RS:4;jenkins-hbase4:43693] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78bafdff{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:27,410 DEBUG [RS:2;jenkins-hbase4:36737] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,407 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:27,417 DEBUG [RS:3;jenkins-hbase4:41955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1702f866 to 127.0.0.1:56695 2023-07-13 15:16:27,417 DEBUG [RS:3;jenkins-hbase4:41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,417 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:27,417 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1478): Online Regions={111352044b1bd403da18db964c499c82=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.} 2023-07-13 15:16:27,416 INFO [RS:2;jenkins-hbase4:36737] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:27,418 INFO [RS:2;jenkins-hbase4:36737] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:27,418 INFO [RS:2;jenkins-hbase4:36737] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:27,418 INFO [RS:4;jenkins-hbase4:43693] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:27,418 INFO [RS:4;jenkins-hbase4:43693] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:27,419 INFO [RS:4;jenkins-hbase4:43693] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:27,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:27,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:27,419 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:27,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:27,419 DEBUG [RS:4;jenkins-hbase4:43693] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x53eeb546 to 127.0.0.1:56695 2023-07-13 15:16:27,419 DEBUG [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1504): Waiting on 111352044b1bd403da18db964c499c82 2023-07-13 15:16:27,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:27,418 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:27,418 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:27,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:27,420 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:27,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 111352044b1bd403da18db964c499c82 1/1 column families, dataSize=15.26 KB heapSize=24.78 KB 2023-07-13 15:16:27,420 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 15:16:27,420 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1478): Online Regions={8f9b3c3c0c701a7e057738cfe2a31027=hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., 1588230740=hbase:meta,,1.1588230740} 2023-07-13 15:16:27,420 DEBUG [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1504): Waiting on 1588230740, 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:27,421 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:27,419 DEBUG [RS:4;jenkins-hbase4:43693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:27,422 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43693,1689261373307; all regions closed. 2023-07-13 15:16:27,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
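The "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])" and "Waiting for cleanup to finish" entries earlier in this teardown come from the Waiter polling helper, which re-evaluates a predicate until it holds or the timeout expires. A minimal sketch of that pattern, with a hypothetical condition standing in for the test's real group and table checks:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.Waiter;

public class WaitForCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Waiter.waitFor polls the predicate until it returns true or the
    // 60,000 ms timeout (scaled by wait.for.ratio) is exceeded.
    Waiter.waitFor(conf, 60000, new Waiter.Predicate<Exception>() {
      @Override
      public boolean evaluate() throws Exception {
        return cleanupFinished(); // hypothetical stand-in for the real checks
      }
    });
  }

  private static boolean cleanupFinished() {
    // Placeholder: the test's predicate verifies that only the expected
    // rsgroups and tables remain before the next test method starts.
    return true;
  }
}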
2023-07-13 15:16:27,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8f9b3c3c0c701a7e057738cfe2a31027 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-13 15:16:27,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:27,426 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:27,426 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:27,426 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:27,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=15.27 KB heapSize=25.58 KB 2023-07-13 15:16:27,432 DEBUG [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,432 INFO [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34275%2C1689261367926.meta:.meta(num 1689261370444) 2023-07-13 15:16:27,440 DEBUG [RS:4;jenkins-hbase4:43693] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,440 INFO [RS:4;jenkins-hbase4:43693] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43693%2C1689261373307:(num 1689261373643) 2023-07-13 15:16:27,440 DEBUG [RS:4;jenkins-hbase4:43693] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,440 INFO [RS:4;jenkins-hbase4:43693] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,441 INFO [RS:4;jenkins-hbase4:43693] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:27,442 INFO [RS:4;jenkins-hbase4:43693] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:27,442 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:27,442 INFO [RS:4;jenkins-hbase4:43693] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:27,442 INFO [RS:4;jenkins-hbase4:43693] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:27,445 DEBUG [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,445 INFO [RS:1;jenkins-hbase4:34275] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34275%2C1689261367926:(num 1689261370185) 2023-07-13 15:16:27,445 DEBUG [RS:1;jenkins-hbase4:34275] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,445 INFO [RS:1;jenkins-hbase4:34275] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,452 INFO [RS:4;jenkins-hbase4:43693] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43693 2023-07-13 15:16:27,456 INFO [RS:1;jenkins-hbase4:34275] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:27,456 INFO [RS:1;jenkins-hbase4:34275] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:27,457 INFO [RS:1;jenkins-hbase4:34275] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:27,457 INFO [RS:1;jenkins-hbase4:34275] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:27,457 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:27,458 INFO [RS:1;jenkins-hbase4:34275] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34275 2023-07-13 15:16:27,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,467 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=15.26 KB at sequenceid=73 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:27,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=17 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/.tmp/info/0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:27,472 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=12.69 KB at sequenceid=148 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:27,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:27,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for 0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:27,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/a4b8548565d54812ae823f3bc7af5c62 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:27,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/.tmp/info/0c14a3c301924edd9435fdf2dd29da5a as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:27,486 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:27,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:27,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62, entries=21, sequenceid=73, filesize=5.7 K 2023-07-13 15:16:27,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~15.26 KB/15630, heapSize ~24.77 KB/25360, currentSize=0 B/0 for 111352044b1bd403da18db964c499c82 in 68ms, sequenceid=73, compaction requested=false 2023-07-13 15:16:27,488 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:27,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/0c14a3c301924edd9435fdf2dd29da5a, entries=2, sequenceid=17, filesize=4.9 K 2023-07-13 15:16:27,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 8f9b3c3c0c701a7e057738cfe2a31027 in 68ms, sequenceid=17, compaction requested=false 2023-07-13 15:16:27,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/20.seqid, newMaxSeqId=20, maxSeqId=10 2023-07-13 15:16:27,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/76.seqid, newMaxSeqId=76, maxSeqId=12 2023-07-13 15:16:27,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:27,511 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:27,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:27,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:27,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:27,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:27,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:27,522 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=555 B at sequenceid=148 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/rep_barrier/e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34275,1689261367926 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,524 
DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,525 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:27,525 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:27,525 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:27,524 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,525 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43693,1689261373307 2023-07-13 15:16:27,526 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34275,1689261367926] 2023-07-13 15:16:27,526 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34275,1689261367926; numProcessing=1 2023-07-13 15:16:27,528 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34275,1689261367926 already deleted, retry=false 2023-07-13 15:16:27,529 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34275,1689261367926 expired; onlineServers=3 2023-07-13 15:16:27,529 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43693,1689261373307] 2023-07-13 15:16:27,529 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43693,1689261373307; numProcessing=2 2023-07-13 15:16:27,529 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:27,530 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43693,1689261373307 already deleted, retry=false 2023-07-13 15:16:27,530 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43693,1689261373307 expired; onlineServers=2 2023-07-13 15:16:27,541 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.04 KB at sequenceid=148 (bloomFilter=false), 
to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:27,547 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:27,548 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/3e2c8a0d327b43a89086a648a1aed48b as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:27,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:27,553 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b, entries=20, sequenceid=148, filesize=7.1 K 2023-07-13 15:16:27,554 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/rep_barrier/e29818aea41d432187d74bea6ea06843 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:27,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:27,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/e29818aea41d432187d74bea6ea06843, entries=5, sequenceid=148, filesize=5.5 K 2023-07-13 15:16:27,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/d67a1478638944cbab7e2c10f09f1d65 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:27,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:27,576 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65, entries=10, sequenceid=148, filesize=5.7 K 2023-07-13 15:16:27,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~15.27 KB/15641, heapSize ~25.53 KB/26144, currentSize=0 B/0 for 1588230740 in 151ms, sequenceid=148, compaction requested=false 2023-07-13 15:16:27,598 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/151.seqid, newMaxSeqId=151, maxSeqId=82 2023-07-13 15:16:27,599 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:27,599 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:27,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:27,600 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:27,620 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41955,1689261371593; all regions closed. 2023-07-13 15:16:27,621 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36737,1689261368119; all regions closed. 2023-07-13 15:16:27,634 DEBUG [RS:3;jenkins-hbase4:41955] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41955%2C1689261371593:(num 1689261371958) 2023-07-13 15:16:27,634 DEBUG [RS:3;jenkins-hbase4:41955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:27,634 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:27,634 INFO [RS:3;jenkins-hbase4:41955] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:27,636 INFO [RS:3;jenkins-hbase4:41955] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41955 2023-07-13 15:16:27,639 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:27,639 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41955,1689261371593 2023-07-13 15:16:27,639 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,640 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41955,1689261371593] 2023-07-13 15:16:27,640 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41955,1689261371593; numProcessing=3 2023-07-13 15:16:27,641 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41955,1689261371593 already deleted, retry=false 2023-07-13 15:16:27,641 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41955,1689261371593 expired; onlineServers=1 2023-07-13 15:16:27,647 DEBUG [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,647 INFO [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36737%2C1689261368119.meta:.meta(num 1689261378843) 2023-07-13 15:16:27,655 DEBUG [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:27,656 INFO [RS:2;jenkins-hbase4:36737] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36737%2C1689261368119:(num 1689261370185) 2023-07-13 15:16:27,656 DEBUG [RS:2;jenkins-hbase4:36737] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,656 INFO [RS:2;jenkins-hbase4:36737] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:27,656 INFO [RS:2;jenkins-hbase4:36737] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:27,656 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 15:16:27,657 INFO [RS:2;jenkins-hbase4:36737] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36737 2023-07-13 15:16:27,659 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36737,1689261368119 2023-07-13 15:16:27,659 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:27,659 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36737,1689261368119] 2023-07-13 15:16:27,659 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36737,1689261368119; numProcessing=4 2023-07-13 15:16:27,662 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36737,1689261368119 already deleted, retry=false 2023-07-13 15:16:27,662 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36737,1689261368119 expired; onlineServers=0 2023-07-13 15:16:27,662 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38141,1689261365700' ***** 2023-07-13 15:16:27,662 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:27,663 DEBUG [M:0;jenkins-hbase4:38141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72f1dd03, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:27,663 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:27,665 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:27,665 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:27,666 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:27,666 INFO [M:0;jenkins-hbase4:38141] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3dee0740{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:27,666 INFO [M:0;jenkins-hbase4:38141] server.AbstractConnector(383): Stopped ServerConnector@6449840d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,666 INFO [M:0;jenkins-hbase4:38141] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:27,667 INFO [M:0;jenkins-hbase4:38141] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6d756d8e{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:27,667 INFO [M:0;jenkins-hbase4:38141] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f33f59f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:27,668 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38141,1689261365700 2023-07-13 15:16:27,668 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38141,1689261365700; all regions closed. 2023-07-13 15:16:27,668 DEBUG [M:0;jenkins-hbase4:38141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:27,668 INFO [M:0;jenkins-hbase4:38141] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:27,669 INFO [M:0;jenkins-hbase4:38141] server.AbstractConnector(383): Stopped ServerConnector@7082f2b7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:27,669 DEBUG [M:0;jenkins-hbase4:38141] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:27,669 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 15:16:27,669 DEBUG [M:0;jenkins-hbase4:38141] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:27,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261369697] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261369697,5,FailOnTimeoutGroup] 2023-07-13 15:16:27,670 INFO [M:0;jenkins-hbase4:38141] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:27,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261369697] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261369697,5,FailOnTimeoutGroup] 2023-07-13 15:16:27,670 INFO [M:0;jenkins-hbase4:38141] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-13 15:16:27,670 INFO [M:0;jenkins-hbase4:38141] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 15:16:27,670 DEBUG [M:0;jenkins-hbase4:38141] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:27,670 INFO [M:0;jenkins-hbase4:38141] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:27,671 ERROR [M:0;jenkins-hbase4:38141] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-13 15:16:27,671 INFO [M:0;jenkins-hbase4:38141] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:27,672 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 15:16:27,672 DEBUG [M:0;jenkins-hbase4:38141] zookeeper.ZKUtil(398): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:27,672 WARN [M:0;jenkins-hbase4:38141] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:27,672 INFO [M:0;jenkins-hbase4:38141] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:27,672 INFO [M:0;jenkins-hbase4:38141] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:27,672 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:27,672 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,672 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,672 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:27,672 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 15:16:27,673 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=382.76 KB heapSize=456.51 KB 2023-07-13 15:16:27,698 INFO [M:0;jenkins-hbase4:38141] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=382.76 KB at sequenceid=844 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a726e1ea1b1541a09f18088d5513b202 2023-07-13 15:16:27,705 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a726e1ea1b1541a09f18088d5513b202 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a726e1ea1b1541a09f18088d5513b202 2023-07-13 15:16:27,711 INFO [M:0;jenkins-hbase4:38141] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a726e1ea1b1541a09f18088d5513b202, entries=114, sequenceid=844, filesize=26.1 K 2023-07-13 15:16:27,714 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegion(2948): Finished flush of dataSize ~382.76 KB/391942, heapSize ~456.49 KB/467448, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 41ms, sequenceid=844, compaction requested=false 2023-07-13 15:16:27,717 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:27,717 DEBUG [M:0;jenkins-hbase4:38141] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:27,731 INFO [M:0;jenkins-hbase4:38141] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:27,732 INFO [M:0;jenkins-hbase4:38141] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38141 2023-07-13 15:16:27,732 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:27,734 DEBUG [M:0;jenkins-hbase4:38141] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38141,1689261365700 already deleted, retry=false 2023-07-13 15:16:27,988 INFO [M:0;jenkins-hbase4:38141] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38141,1689261365700; zookeeper connection closed. 2023-07-13 15:16:27,988 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:27,988 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:38141-0x1015f4159470000, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,088 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,088 INFO [RS:2;jenkins-hbase4:36737] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36737,1689261368119; zookeeper connection closed. 
2023-07-13 15:16:28,088 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:36737-0x1015f4159470003, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,089 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6ee37307] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6ee37307 2023-07-13 15:16:28,188 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,189 INFO [RS:3;jenkins-hbase4:41955] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41955,1689261371593; zookeeper connection closed. 2023-07-13 15:16:28,189 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:41955-0x1015f415947000b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,189 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d82670f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d82670f 2023-07-13 15:16:28,289 INFO [RS:4;jenkins-hbase4:43693] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43693,1689261373307; zookeeper connection closed. 2023-07-13 15:16:28,289 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,289 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43693-0x1015f415947000d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,289 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@62244a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@62244a 2023-07-13 15:16:28,389 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,389 INFO [RS:1;jenkins-hbase4:34275] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34275,1689261367926; zookeeper connection closed. 2023-07-13 15:16:28,389 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:34275-0x1015f4159470002, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:28,389 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@47f60454] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@47f60454 2023-07-13 15:16:28,389 INFO [Listener at localhost/35161] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-13 15:16:28,390 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-13 15:16:30,391 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(134): Setting Master Port to random. 
2023-07-13 15:16:30,391 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 15:16:30,391 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 15:16:30,391 DEBUG [Listener at localhost/35161] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,392 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:30,393 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:30,393 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46509 2023-07-13 15:16:30,394 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,395 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,396 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46509 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:30,400 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:465090x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:30,401 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46509-0x1015f4159470010 connected 2023-07-13 15:16:30,404 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:30,404 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,404 DEBUG [Listener at localhost/35161] 
zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:30,405 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46509 2023-07-13 15:16:30,405 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46509 2023-07-13 15:16:30,406 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46509 2023-07-13 15:16:30,406 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46509 2023-07-13 15:16:30,407 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46509 2023-07-13 15:16:30,409 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:30,409 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:30,409 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:30,410 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:16:30,410 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:30,410 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:30,410 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:30,411 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 42377 2023-07-13 15:16:30,411 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:30,419 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,420 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@615705c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:30,420 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,420 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40d99cb9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:30,550 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:30,551 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:30,551 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:30,552 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:30,555 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,557 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@77a33bbe{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-42377-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3003505931423332191/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:30,559 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@50727a01{HTTP/1.1, (http/1.1)}{0.0.0.0:42377} 2023-07-13 15:16:30,559 INFO [Listener at localhost/35161] server.Server(415): Started @30559ms 2023-07-13 15:16:30,559 INFO [Listener at localhost/35161] master.HMaster(444): hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046, hbase.cluster.distributed=false 2023-07-13 15:16:30,564 DEBUG [pool-351-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,579 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:30,580 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:30,589 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33739 2023-07-13 15:16:30,590 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:30,595 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:30,596 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,598 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,599 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33739 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:30,604 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:337390x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:30,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33739-0x1015f4159470011 connected 2023-07-13 15:16:30,605 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:30,606 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,606 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:30,608 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33739 2023-07-13 15:16:30,608 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33739 2023-07-13 15:16:30,608 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33739 2023-07-13 15:16:30,610 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33739 2023-07-13 15:16:30,611 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33739 2023-07-13 15:16:30,613 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:30,613 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:30,613 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:30,613 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:30,614 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:30,614 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:30,614 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:30,614 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 37387 2023-07-13 15:16:30,615 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:30,616 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,616 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31d1104{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:30,616 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,617 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@267fa735{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:30,751 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:30,752 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:30,752 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:30,752 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:30,753 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,754 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6b9ffb31{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-37387-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8767018397388069693/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:30,755 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@2e30a10a{HTTP/1.1, (http/1.1)}{0.0.0.0:37387} 2023-07-13 15:16:30,755 INFO [Listener at localhost/35161] server.Server(415): Started @30755ms 2023-07-13 15:16:30,766 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:30,766 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,767 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,767 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:30,767 INFO 
[Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,767 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:30,767 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:30,768 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38071 2023-07-13 15:16:30,768 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:30,769 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:30,770 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,771 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,771 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38071 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:30,775 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:380710x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:30,777 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:380710x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:30,777 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38071-0x1015f4159470012 connected 2023-07-13 15:16:30,777 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,778 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:30,779 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38071 2023-07-13 15:16:30,779 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38071 2023-07-13 15:16:30,781 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38071 2023-07-13 15:16:30,781 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38071 2023-07-13 15:16:30,782 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=38071 2023-07-13 15:16:30,783 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:30,783 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:30,784 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:30,784 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:30,784 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:30,784 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:30,784 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:30,785 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 43893 2023-07-13 15:16:30,785 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:30,789 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,789 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28eeeebb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:30,790 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,790 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@bad01f9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:30,926 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:30,929 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:30,929 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:30,930 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:30,932 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,933 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1784eed4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-43893-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5178476143578293750/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:30,934 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@59ce34f3{HTTP/1.1, (http/1.1)}{0.0.0.0:43893} 2023-07-13 15:16:30,935 INFO [Listener at localhost/35161] server.Server(415): Started @30934ms 2023-07-13 15:16:30,950 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:30,951 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:30,952 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43473 2023-07-13 15:16:30,953 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:30,959 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:30,960 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,961 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:30,963 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43473 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:30,970 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:434730x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:30,971 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): 
regionserver:434730x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:30,973 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43473-0x1015f4159470013 connected 2023-07-13 15:16:30,973 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:30,974 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:30,975 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43473 2023-07-13 15:16:30,975 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43473 2023-07-13 15:16:30,980 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43473 2023-07-13 15:16:30,980 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43473 2023-07-13 15:16:30,981 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43473 2023-07-13 15:16:30,983 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:30,983 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:30,983 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:30,984 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:30,984 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:30,984 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:30,985 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:30,985 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 45697 2023-07-13 15:16:30,985 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:30,990 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,990 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15f914be{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:30,991 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:30,991 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6343c989{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:31,135 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:31,136 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:31,136 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:31,136 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:31,137 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:31,138 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@61080f19{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-45697-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1628003618529537376/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:31,140 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@68a6e4a6{HTTP/1.1, (http/1.1)}{0.0.0.0:45697} 2023-07-13 15:16:31,141 INFO [Listener at localhost/35161] server.Server(415): Started @31140ms 2023-07-13 15:16:31,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:31,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@1304bd11{HTTP/1.1, (http/1.1)}{0.0.0.0:39313} 2023-07-13 15:16:31,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @31152ms 2023-07-13 15:16:31,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,154 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, 
quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:31,154 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,157 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:31,157 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:31,157 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:31,157 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:31,158 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:31,164 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:31,165 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:31,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46509,1689261390391 from backup master directory 2023-07-13 15:16:31,166 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,166 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:31,166 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:31,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,188 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:31,222 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x7c2ecdb2 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:31,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@451d7c7f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:31,234 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:31,235 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:16:31,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:31,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700-dead as it is dead 2023-07-13 15:16:31,246 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700-dead/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:31,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700-dead/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 after 4ms 2023-07-13 15:16:31,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700-dead/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:31,251 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,38141,1689261365700-dead 2023-07-13 15:16:31,252 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46509%2C1689261390391, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/oldWALs, maxLogs=10 2023-07-13 15:16:31,272 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:31,278 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:31,283 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:31,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:31,294 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]] 2023-07-13 15:16:31,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:31,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:31,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:31,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:31,298 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:31,300 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:16:31,301 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:16:31,308 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a726e1ea1b1541a09f18088d5513b202 2023-07-13 15:16:31,308 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:31,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): Found 1 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-13 15:16:31,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:31,350 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 995, firstSequenceIdInLog=3, maxSequenceIdInLog=846, path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:31,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C38141%2C1689261365700.1689261368864 2023-07-13 15:16:31,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:31,365 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/846.seqid, newMaxSeqId=846, maxSeqId=1 2023-07-13 15:16:31,367 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=847; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11940344480, jitterRate=0.1120312362909317}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:31,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:31,371 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:16:31,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:16:31,372 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:16:31,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 15:16:31,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 15:16:31,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:31,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:31,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:31,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:31,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:31,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:31,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false 2023-07-13 15:16:31,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-13 15:16:31,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:31,389 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:31,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:31,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:31,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:31,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:31,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:31,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:31,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:31,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:31,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:31,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:31,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:31,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:31,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-13 15:16:31,393 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(411): Completed pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 ) 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:31,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:31,395 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:31,395 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:31,395 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:31,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:31,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:31,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 22 msec 2023-07-13 15:16:31,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:16:31,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-13 15:16:31,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,36737,1689261368119, table=hbase:meta, region=1588230740 2023-07-13 
15:16:31,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 4 possibly 'live' servers, and 0 'splitting'. 2023-07-13 15:16:31,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41955,1689261371593 already deleted, retry=false 2023-07-13 15:16:31,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,41955,1689261371593 on jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=115, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,41955,1689261371593, splitWal=true, meta=false 2023-07-13 15:16:31,406 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=115 for jenkins-hbase4.apache.org,41955,1689261371593 (carryingMeta=false) jenkins-hbase4.apache.org,41955,1689261371593/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@14376a7[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:31,408 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36737,1689261368119 already deleted, retry=false 2023-07-13 15:16:31,408 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,36737,1689261368119 on jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,409 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true 2023-07-13 15:16:31,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=116 for jenkins-hbase4.apache.org,36737,1689261368119 (carryingMeta=true) jenkins-hbase4.apache.org,36737,1689261368119/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@186a8936[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:31,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34275,1689261367926 already deleted, retry=false 2023-07-13 15:16:31,411 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,34275,1689261367926 on jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,34275,1689261367926, splitWal=true, meta=false 2023-07-13 15:16:31,411 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=117 for jenkins-hbase4.apache.org,34275,1689261367926 (carryingMeta=false) jenkins-hbase4.apache.org,34275,1689261367926/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@f79b2fa[Write locks = 1, Read locks = 0], oldState=ONLINE. 
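[annotation] The completed-procedure entries replayed above (CreateNamespaceProcedure, CreateTableProcedure, DisableTableProcedure, DeleteTableProcedure, CloneSnapshotProcedure, and so on) are the master-side record of ordinary Admin calls issued by the test before the restart. The following is a minimal, self-contained sketch of the kind of client code that leaves that sort of procedure trail; the connection defaults, the "demo_ns" namespace, table names, and snapshot name are illustrative placeholders, not values from this run.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class TableLifecycleSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Shows up on the master as CreateNamespaceProcedure / CreateTableProcedure.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      TableName table = TableName.valueOf("demo_ns", "demo_table");
      TableName clone = TableName.valueOf("demo_ns", "demo_table_clone");
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());

      // Taking a snapshot and cloning it; the clone is what appears above as
      // CloneSnapshotProcedure, guarded by EXCLUSIVE/SHARED LockProcedures.
      admin.snapshot("demo_snap", table);
      admin.cloneSnapshot("demo_snap", clone);
      admin.deleteSnapshot("demo_snap");

      // DisableTableProcedure / DeleteTableProcedure / DeleteNamespaceProcedure.
      admin.disableTable(table);
      admin.deleteTable(table);
      admin.disableTable(clone);
      admin.deleteTable(clone);
      admin.deleteNamespace("demo_ns");
    }
  }
}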
2023-07-13 15:16:31,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43693,1689261373307 already deleted, retry=false 2023-07-13 15:16:31,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43693,1689261373307 on jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,413 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=118, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43693,1689261373307, splitWal=true, meta=false 2023-07-13 15:16:31,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=118 for jenkins-hbase4.apache.org,43693,1689261373307 (carryingMeta=false) jenkins-hbase4.apache.org,43693,1689261373307/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@1b00baab[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:31,414 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-13 15:16:31,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:16:31,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:16:31,415 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:16:31,416 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:16:31,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:16:31,424 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:31,424 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:31,424 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:31,424 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:31,424 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): 
regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:31,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46509,1689261390391, sessionid=0x1015f4159470010, setting cluster-up flag (Was=false) 2023-07-13 15:16:31,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:16:31,430 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:16:31,434 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:31,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:16:31,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:16:31,438 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-13 15:16:31,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:16:31,439 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:31,439 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 15:16:31,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-13 15:16:31,444 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:31,445 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:36737 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36737 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:31,446 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:36737 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:36737 2023-07-13 15:16:31,447 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:31,448 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:31,448 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:31,449 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:31,450 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:31,451 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:31,454 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:31,454 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:31,456 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:31,457 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:31,457 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:31,458 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ReadOnlyZKClient(139): Connect 0x4426426e to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:31,461 DEBUG [RS:1;jenkins-hbase4:38071] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:31,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:31,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:31,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:16:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,466 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:31,467 DEBUG [RS:2;jenkins-hbase4:43473] 
procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:31,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261421474 2023-07-13 15:16:31,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:16:31,476 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ReadOnlyZKClient(139): Connect 0x2f40121a to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:31,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:16:31,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:16:31,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:16:31,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:16:31,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:16:31,479 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:31,483 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
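[annotation] The cleaner initialization above lists the log-cleaner delegates this run used and enables the LogsCleaner chore with period=600000 ms. A small configuration sketch is below; the property names (hbase.master.cleaner.interval, hbase.master.logcleaner.plugins) are my assumption of the standard 2.x settings and should be checked against the reference guide for this release.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // How often the LogsCleaner/HFileCleaner chores run; 600000 ms matches the
    // period reported above. Key name assumed.
    conf.setInt("hbase.master.cleaner.interval", 600_000);
    // Comma-separated list of log-cleaner delegates; the two classes named here
    // appear verbatim in the initialization messages above. Key name assumed.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
  }
}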
2023-07-13 15:16:31,484 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36737,1689261368119; numProcessing=1 2023-07-13 15:16:31,484 INFO [PEWorker-1] procedure.ServerCrashProcedure(161): Start pid=116, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true 2023-07-13 15:16:31,484 DEBUG [RS:0;jenkins-hbase4:33739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ae1a49a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:31,484 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ReadOnlyZKClient(139): Connect 0x1fdf75f2 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:31,484 DEBUG [RS:0;jenkins-hbase4:33739] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ede291e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:31,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:16:31,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:16:31,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:16:31,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:16:31,489 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:16:31,489 DEBUG [PEWorker-4] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43693,1689261373307; numProcessing=2 2023-07-13 15:16:31,489 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41955,1689261371593; numProcessing=3 2023-07-13 15:16:31,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261391489,5,FailOnTimeoutGroup] 2023-07-13 15:16:31,490 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=115, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41955,1689261371593, splitWal=true, meta=false 2023-07-13 15:16:31,489 INFO [PEWorker-4] procedure.ServerCrashProcedure(161): Start pid=118, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43693,1689261373307, splitWal=true, meta=false 2023-07-13 15:16:31,490 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34275,1689261367926; numProcessing=4 2023-07-13 15:16:31,490 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=116, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true, isMeta: 
true 2023-07-13 15:16:31,490 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=117, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34275,1689261367926, splitWal=true, meta=false 2023-07-13 15:16:31,492 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119-splitting 2023-07-13 15:16:31,493 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119-splitting dir is empty, no logs to split. 2023-07-13 15:16:31,493 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,36737,1689261368119 WAL count=0, meta=true 2023-07-13 15:16:31,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261391490,5,FailOnTimeoutGroup] 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261391505, completionTime=-1 2023-07-13 15:16:31,505 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-13 15:16:31,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-13 15:16:31,506 DEBUG [RS:1;jenkins-hbase4:38071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2187f982, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:31,507 DEBUG [RS:1;jenkins-hbase4:38071] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25b8796a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:31,512 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119-splitting dir is empty, no logs to split. 
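[annotation] The WARN above quotes two master startup properties verbatim: with hbase.master.wait.on.regionservers.maxtostart left at -1 it is ignored, and the master waits only for the mintostart count (or the 4500 ms timeout) before proceeding. A minimal sketch of setting them explicitly; the values shown are illustrative, and the timeout key name is an assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MasterStartupWaitSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Minimum and maximum region servers the master waits for at startup;
    // both property names are quoted in the WARN entry above.
    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 3);
    // Upper bound on the wait; 4500 ms matches the timeout logged above
    // (key name assumed).
    conf.setInt("hbase.master.wait.on.regionservers.timeout", 4500);
  }
}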
2023-07-13 15:16:31,512 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,36737,1689261368119 WAL count=0, meta=true 2023-07-13 15:16:31,512 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,36737,1689261368119 WAL splitting is done? wals=0, meta=true 2023-07-13 15:16:31,515 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:16:31,518 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=119, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:16:31,524 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33739 2023-07-13 15:16:31,524 INFO [RS:0;jenkins-hbase4:33739] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:31,524 INFO [RS:0;jenkins-hbase4:33739] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:31,524 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:31,525 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=119, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:31,526 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46509,1689261390391 with isa=jenkins-hbase4.apache.org/172.31.14.131:33739, startcode=1689261390578 2023-07-13 15:16:31,526 DEBUG [RS:0;jenkins-hbase4:33739] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:31,527 DEBUG [RS:2;jenkins-hbase4:43473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11189879, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:31,527 DEBUG [RS:2;jenkins-hbase4:43473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15271e82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:31,527 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38071 2023-07-13 15:16:31,527 INFO [RS:1;jenkins-hbase4:38071] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:31,527 INFO [RS:1;jenkins-hbase4:38071] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:31,527 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 15:16:31,528 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46509,1689261390391 with isa=jenkins-hbase4.apache.org/172.31.14.131:38071, startcode=1689261390766 2023-07-13 15:16:31,528 DEBUG [RS:1;jenkins-hbase4:38071] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:31,529 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40779, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:31,530 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46509] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,530 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:31,531 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:31,532 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:31,532 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:31,532 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42377 2023-07-13 15:16:31,535 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44507, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:31,535 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46509] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:31,536 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:16:31,536 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:31,536 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:31,536 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42377 2023-07-13 15:16:31,538 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:43473 2023-07-13 15:16:31,538 INFO [RS:2;jenkins-hbase4:43473] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:31,538 INFO [RS:2;jenkins-hbase4:43473] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:31,538 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:31,539 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:31,540 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ZKUtil(162): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,540 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46509,1689261390391 with isa=jenkins-hbase4.apache.org/172.31.14.131:43473, startcode=1689261390950 2023-07-13 15:16:31,540 WARN [RS:0;jenkins-hbase4:33739] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:31,540 DEBUG [RS:2;jenkins-hbase4:43473] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:31,540 INFO [RS:0;jenkins-hbase4:33739] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:31,540 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,541 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35635, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:31,541 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46509] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,542 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
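[annotation] The ServerEventsListenerThread messages above fire as each newly registered region server is added to the "default" RSGroup. A hedged sketch of inspecting and changing group membership with the RSGroupAdminClient API from the hbase-rsgroup module on branch-2 (method names and the constructor may differ between releases; the host, port, and the "appInfo" group reuse names visible in this log purely for illustration):

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(connection);
      // Freshly registered servers land in the default group, which is what the
      // "Updating default servers" / "Updated with servers: N" entries reflect.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + defaultGroup.getServers());

      // Moving a server into a dedicated group (hostname and port are placeholders).
      rsGroupAdmin.addRSGroup("appInfo");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("regionserver-host.example.org", 16020)),
          "appInfo");
    }
  }
}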
2023-07-13 15:16:31,542 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:31,542 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:31,542 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:31,542 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42377 2023-07-13 15:16:31,544 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ZKUtil(162): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,544 WARN [RS:1;jenkins-hbase4:38071] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:31,544 INFO [RS:1;jenkins-hbase4:38071] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:31,544 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ZKUtil(162): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,544 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,544 WARN [RS:2;jenkins-hbase4:43473] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
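[annotation] Each region server above instantiates a WALProvider of type AsyncFSWALProvider. A one-line configuration sketch for selecting the provider explicitly; the property name and the "asyncfs"/"filesystem" values are my assumption of the standard 2.x setting.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects the AsyncFSWALProvider reported by the WALFactory lines
    // above; "filesystem" would fall back to the classic FSHLog-based provider.
    conf.set("hbase.wal.provider", "asyncfs");
  }
}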
2023-07-13 15:16:31,544 INFO [RS:2;jenkins-hbase4:43473] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:31,544 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,547 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:36737 this server is in the failed servers list 2023-07-13 15:16:31,552 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33739,1689261390578] 2023-07-13 15:16:31,552 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43473,1689261390950] 2023-07-13 15:16:31,552 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38071,1689261390766] 2023-07-13 15:16:31,555 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=50ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-13 15:16:31,565 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ZKUtil(162): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,565 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ZKUtil(162): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,565 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ZKUtil(162): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:31,565 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ZKUtil(162): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,565 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ZKUtil(162): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,565 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ZKUtil(162): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,566 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ZKUtil(162): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,566 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ZKUtil(162): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,566 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ZKUtil(162): regionserver:43473-0x1015f4159470013, 
quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:31,567 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:31,567 INFO [RS:1;jenkins-hbase4:38071] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:31,568 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:31,568 INFO [RS:2;jenkins-hbase4:43473] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:31,572 INFO [RS:1;jenkins-hbase4:38071] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:31,583 INFO [RS:1;jenkins-hbase4:38071] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:31,583 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,585 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:31,585 INFO [RS:0;jenkins-hbase4:33739] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:31,591 INFO [RS:2;jenkins-hbase4:43473] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:31,604 INFO [RS:0;jenkins-hbase4:33739] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:31,617 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:31,619 INFO [RS:2;jenkins-hbase4:43473] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:31,619 INFO [RS:0;jenkins-hbase4:33739] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:31,619 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,619 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,620 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:31,622 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:31,623 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
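[annotation] The MemStoreFlusher and PressureAwareCompactionThroughputController entries above report the effective memstore limit and the 100 MB/s / 50 MB/s compaction throughput bounds. A hedged tuning sketch follows; the global memstore key is the standard one, while the two throughput property names are assumptions based on the usual 2.x settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionServerTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap shared by all memstores; the
    // globalMemStoreLimit figure above is this fraction of the test JVM heap.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Upper/lower bounds used by PressureAwareCompactionThroughputController;
    // 100 MB/s and 50 MB/s match the figures logged above (key names assumed).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
  }
}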
2023-07-13 15:16:31,623 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,623 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,623 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,623 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,624 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:31,624 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:31,624 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:0;jenkins-hbase4:33739] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,624 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:1;jenkins-hbase4:38071] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:31,625 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,625 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,625 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:31,625 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,625 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,626 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,626 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,626 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,626 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,626 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,626 DEBUG [RS:2;jenkins-hbase4:43473] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:31,626 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,626 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,629 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,629 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,629 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,629 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,640 INFO [RS:0;jenkins-hbase4:33739] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:31,640 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33739,1689261390578-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:31,640 INFO [RS:2;jenkins-hbase4:43473] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-13 15:16:31,640 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43473,1689261390950-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,645 INFO [RS:1;jenkins-hbase4:38071] regionserver.HeapMemoryManager(209): Starting, tuneOn=false
2023-07-13 15:16:31,645 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38071,1689261390766-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,651 INFO [RS:2;jenkins-hbase4:43473] regionserver.Replication(203): jenkins-hbase4.apache.org,43473,1689261390950 started
2023-07-13 15:16:31,651 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43473,1689261390950, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43473, sessionid=0x1015f4159470013
2023-07-13 15:16:31,651 INFO [RS:0;jenkins-hbase4:33739] regionserver.Replication(203): jenkins-hbase4.apache.org,33739,1689261390578 started
2023-07-13 15:16:31,651 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-13 15:16:31,651 DEBUG [RS:2;jenkins-hbase4:43473] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43473,1689261390950
2023-07-13 15:16:31,651 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43473,1689261390950'
2023-07-13 15:16:31,651 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-13 15:16:31,651 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33739,1689261390578, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33739, sessionid=0x1015f4159470011
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33739,1689261390578
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33739,1689261390578'
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43473,1689261390950
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43473,1689261390950'
2023-07-13 15:16:31,652 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33739,1689261390578
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33739,1689261390578'
2023-07-13 15:16:31,652 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort'
2023-07-13 15:16:31,653 DEBUG [RS:2;jenkins-hbase4:43473] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-13 15:16:31,653 DEBUG [RS:0;jenkins-hbase4:33739] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired'
2023-07-13 15:16:31,653 DEBUG [RS:2;jenkins-hbase4:43473] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-13 15:16:31,653 INFO [RS:2;jenkins-hbase4:43473] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support
2023-07-13 15:16:31,653 DEBUG [RS:0;jenkins-hbase4:33739] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started
2023-07-13 15:16:31,653 INFO [RS:0;jenkins-hbase4:33739] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support
2023-07-13 15:16:31,655 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,655 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,656 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ZKUtil(398): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error)
2023-07-13 15:16:31,656 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ZKUtil(398): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error)
2023-07-13 15:16:31,656 INFO [RS:0;jenkins-hbase4:33739] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true
2023-07-13 15:16:31,656 INFO [RS:2;jenkins-hbase4:43473] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true
2023-07-13 15:16:31,656 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,656 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,657 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,657 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:31,660 INFO [RS:1;jenkins-hbase4:38071] regionserver.Replication(203): jenkins-hbase4.apache.org,38071,1689261390766 started 2023-07-13 15:16:31,660 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38071,1689261390766, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38071, sessionid=0x1015f4159470012 2023-07-13 15:16:31,660 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:31,660 DEBUG [RS:1;jenkins-hbase4:38071] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,660 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38071,1689261390766' 2023-07-13 15:16:31,660 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38071,1689261390766' 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:31,661 DEBUG [RS:1;jenkins-hbase4:38071] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:31,662 DEBUG [RS:1;jenkins-hbase4:38071] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:31,662 INFO [RS:1;jenkins-hbase4:38071] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 15:16:31,662 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:31,662 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ZKUtil(398): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error)
2023-07-13 15:16:31,662 INFO [RS:1;jenkins-hbase4:38071] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true
2023-07-13 15:16:31,662 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,662 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled.
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0}
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0
2023-07-13 15:16:31,676 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1
2023-07-13 15:16:31,678 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33739,1689261390578, state=OPENING
2023-07-13 15:16:31,680 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-07-13 15:16:31,680 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-13 15:16:31,680 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=120, ppid=119, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33739,1689261390578}]
2023-07-13 15:16:31,749 WARN [ReadOnlyZKClient-127.0.0.1:56695@0x7c2ecdb2] client.ZKConnectionRegistry(168): Meta region is in state OPENING
2023-07-13 15:16:31,750 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-13 15:16:31,751 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51344, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-13 15:16:31,752 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33739] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51344 deadline: 1689261451752, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33739,1689261390578
2023-07-13 15:16:31,760 INFO [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33739%2C1689261390578, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32
2023-07-13 15:16:31,760 INFO [RS:2;jenkins-hbase4:43473] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43473%2C1689261390950, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32
2023-07-13 15:16:31,764 INFO [RS:1;jenkins-hbase4:38071] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38071%2C1689261390766, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32
2023-07-13 15:16:31,784 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]
2023-07-13 15:16:31,784 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]
2023-07-13 15:16:31,784 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]
2023-07-13 15:16:31,792 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]
2023-07-13 15:16:31,792 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]
2023-07-13 15:16:31,792 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]
2023-07-13 15:16:31,795 INFO [RS:2;jenkins-hbase4:43473] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950/jenkins-hbase4.apache.org%2C43473%2C1689261390950.1689261391768
2023-07-13 15:16:31,796 INFO [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578/jenkins-hbase4.apache.org%2C33739%2C1689261390578.1689261391768
2023-07-13 15:16:31,798 DEBUG [RS:2;jenkins-hbase4:43473] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]]
2023-07-13 15:16:31,801 DEBUG [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]]
2023-07-13 15:16:31,801 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]
2023-07-13 15:16:31,801 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]
2023-07-13 15:16:31,801 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]
2023-07-13 15:16:31,803 INFO [RS:1;jenkins-hbase4:38071] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766/jenkins-hbase4.apache.org%2C38071%2C1689261390766.1689261391768
2023-07-13 15:16:31,803 DEBUG [RS:1;jenkins-hbase4:38071] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]]
2023-07-13 15:16:31,833 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33739,1689261390578
2023-07-13 15:16:31,834 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false
2023-07-13 15:16:31,835 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51352, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService
2023-07-13 15:16:31,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740
2023-07-13 15:16:31,839 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2023-07-13 15:16:31,840 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33739%2C1689261390578.meta, suffix=.meta, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32
2023-07-13 15:16:31,854 DEBUG [RS-EventLoopGroup-12-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]
2023-07-13 15:16:31,854 DEBUG [RS-EventLoopGroup-12-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]
2023-07-13 15:16:31,854 DEBUG [RS-EventLoopGroup-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]
2023-07-13 15:16:31,854 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
2023-07-13 15:16:31,856 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578/jenkins-hbase4.apache.org%2C33739%2C1689261390578.meta.1689261391841.meta
2023-07-13 15:16:31,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK]]
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService
2023-07-13 15:16:31,858 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740
2023-07-13 15:16:31,858 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740
2023-07-13 15:16:31,864 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740
2023-07-13 15:16:31,865 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info
2023-07-13 15:16:31,865 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info
2023-07-13 15:16:31,865 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info
2023-07-13 15:16:31,889 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e2c8a0d327b43a89086a648a1aed48b
2023-07-13 15:16:31,890 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b
2023-07-13 15:16:31,907 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c4d1a30fa324b9292b3c505317b9f7f
2023-07-13 15:16:31,910 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f
2023-07-13 15:16:31,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-13 15:16:31,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740
2023-07-13 15:16:31,912 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier
2023-07-13 15:16:31,912 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier
2023-07-13 15:16:31,913 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier
2023-07-13 15:16:31,923 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e29818aea41d432187d74bea6ea06843
2023-07-13 15:16:31,923 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/e29818aea41d432187d74bea6ea06843
2023-07-13 15:16:31,929 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144
2023-07-13 15:16:31,929 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144
2023-07-13 15:16:31,929 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-13 15:16:31,930 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740
2023-07-13 15:16:31,931 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table
2023-07-13 15:16:31,931 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table
2023-07-13 15:16:31,932 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table
2023-07-13 15:16:31,954 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f9c4b698d1c4d0292338c1574eb859a
2023-07-13 15:16:31,954 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a
2023-07-13 15:16:31,962 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d67a1478638944cbab7e2c10f09f1d65
2023-07-13 15:16:31,962 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65
2023-07-13 15:16:31,962 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE
2023-07-13 15:16:31,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740
2023-07-13 15:16:31,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740
2023-07-13 15:16:31,967 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead.
2023-07-13 15:16:31,969 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740
2023-07-13 15:16:31,970 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=152; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11169980800, jitterRate=0.04028552770614624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242}
2023-07-13 15:16:31,970 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740:
2023-07-13 15:16:31,971 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=120, masterSystemTime=1689261391833
2023-07-13 15:16:31,976 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740
2023-07-13 15:16:31,977 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740
2023-07-13 15:16:31,977 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33739,1689261390578, state=OPEN
2023-07-13 15:16:31,979 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server
2023-07-13 15:16:31,979 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED
2023-07-13 15:16:31,982 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=120, resume processing ppid=119
2023-07-13 15:16:31,982 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, ppid=119, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33739,1689261390578 in 300 msec
2023-07-13 15:16:31,984 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=116
2023-07-13 15:16:31,984 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=116, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 467 msec
2023-07-13 15:16:32,070 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-13 15:16:32,071 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41955
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-13 15:16:32,072 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41955 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955
2023-07-13 15:16:32,177 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41955 this server is in the failed servers list
2023-07-13 15:16:32,383 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41955 this server is in the failed servers list
2023-07-13 15:16:32,691 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41955 this server is in the failed servers list
2023-07-13 15:16:33,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1554ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1504ms
2023-07-13 15:16:33,200 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:41955 this server is in the failed servers list
2023-07-13 15:16:34,207 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41955
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955
Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
    at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-13 15:16:34,209 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41955 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955
2023-07-13 15:16:34,562 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3057ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3007ms
2023-07-13 15:16:35,996 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver
2023-07-13 15:16:35,996 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver Metrics about HBase MasterObservers
2023-07-13 15:16:36,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4509ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running
2023-07-13 15:16:36,014 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster...
2023-07-13 15:16:36,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,36737,1689261368119, regionLocation=jenkins-hbase4.apache.org,36737,1689261368119, openSeqNum=11 2023-07-13 15:16:36,022 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=111352044b1bd403da18db964c499c82, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,41955,1689261371593, regionLocation=jenkins-hbase4.apache.org,41955,1689261371593, openSeqNum=13 2023-07-13 15:16:36,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:36,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261456022 2023-07-13 15:16:36,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261516022 2023-07-13 15:16:36,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-13 15:16:36,044 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,36737,1689261368119 had 2 regions 2023-07-13 15:16:36,047 INFO [PEWorker-4] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,34275,1689261367926 had 0 regions 2023-07-13 15:16:36,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46509,1689261390391-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:36,047 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43693,1689261373307 had 0 regions 2023-07-13 15:16:36,047 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46509,1689261390391-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:36,048 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,41955,1689261371593 had 1 regions 2023-07-13 15:16:36,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46509,1689261390391-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:36,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46509, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:36,049 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:36,049 INFO [PEWorker-4] procedure.ServerCrashProcedure(300): Splitting WALs pid=117, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,34275,1689261367926, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:36,049 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=118, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43693,1689261373307, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:36,054 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=116, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true, isMeta: false 2023-07-13 15:16:36,054 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=115, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,41955,1689261371593, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:36,054 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. is NOT online; state={8f9b3c3c0c701a7e057738cfe2a31027 state=OPEN, ts=1689261396021, server=jenkins-hbase4.apache.org,36737,1689261368119}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined. 2023-07-13 15:16:36,055 DEBUG [PEWorker-4] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926-splitting 2023-07-13 15:16:36,057 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,057 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,34275,1689261367926 WAL count=0, meta=false 2023-07-13 15:16:36,058 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307-splitting 2023-07-13 15:16:36,062 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,36737,1689261368119/hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., unknown_server=jenkins-hbase4.apache.org,41955,1689261371593/hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:36,067 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307-splitting dir is empty, no logs to split. 
2023-07-13 15:16:36,067 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43693,1689261373307 WAL count=0, meta=false 2023-07-13 15:16:36,071 DEBUG [PEWorker-5] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593-splitting 2023-07-13 15:16:36,071 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,071 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,36737,1689261368119 WAL count=0, meta=false 2023-07-13 15:16:36,072 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,072 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,41955,1689261371593 WAL count=0, meta=false 2023-07-13 15:16:36,073 INFO [PEWorker-4] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,073 INFO [PEWorker-4] master.SplitWALManager(106): jenkins-hbase4.apache.org,34275,1689261367926 WAL count=0, meta=false 2023-07-13 15:16:36,073 DEBUG [PEWorker-4] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,34275,1689261367926 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:36,074 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,074 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43693,1689261373307 WAL count=0, meta=false 2023-07-13 15:16:36,074 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43693,1689261373307 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:36,076 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,36737,1689261368119-splitting dir is empty, no logs to split. 2023-07-13 15:16:36,076 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,36737,1689261368119 WAL count=0, meta=false 2023-07-13 15:16:36,076 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,36737,1689261368119 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:36,077 INFO [PEWorker-4] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,34275,1689261367926 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,34275,1689261367926-splitting does not exist. 2023-07-13 15:16:36,077 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593-splitting dir is empty, no logs to split. 
2023-07-13 15:16:36,077 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,41955,1689261371593 WAL count=0, meta=false 2023-07-13 15:16:36,077 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,41955,1689261371593 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:36,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN}] 2023-07-13 15:16:36,080 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43693,1689261373307 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43693,1689261373307-splitting does not exist. 2023-07-13 15:16:36,081 INFO [PEWorker-4] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,34275,1689261367926 after splitting done 2023-07-13 15:16:36,081 DEBUG [PEWorker-4] master.DeadServer(114): Removed jenkins-hbase4.apache.org,34275,1689261367926 from processing; numProcessing=3 2023-07-13 15:16:36,082 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN 2023-07-13 15:16:36,082 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43693,1689261373307 after splitting done 2023-07-13 15:16:36,082 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43693,1689261373307 from processing; numProcessing=2 2023-07-13 15:16:36,082 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:36,082 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34275,1689261367926, splitWal=true, meta=false in 4.6700 sec 2023-07-13 15:16:36,083 INFO [PEWorker-5] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,41955,1689261371593 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,41955,1689261371593-splitting does not exist. 
2023-07-13 15:16:36,083 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN}] 2023-07-13 15:16:36,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43693,1689261373307, splitWal=true, meta=false in 4.6690 sec 2023-07-13 15:16:36,084 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN 2023-07-13 15:16:36,085 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=122, ppid=115, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:36,085 DEBUG [jenkins-hbase4:46509] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-13 15:16:36,087 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:36,087 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=122 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:36,087 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261396087"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261396087"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261396087"}]},"ts":"1689261396087"} 2023-07-13 15:16:36,087 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261396087"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261396087"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261396087"}]},"ts":"1689261396087"} 2023-07-13 15:16:36,093 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=123, ppid=121, state=RUNNABLE; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,38071,1689261390766}] 2023-07-13 15:16:36,103 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=122, state=RUNNABLE; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,43473,1689261390950}] 2023-07-13 15:16:36,215 WARN [RS-EventLoopGroup-12-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:41955 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:36,217 DEBUG [RS-EventLoopGroup-12-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:41955 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955 2023-07-13 15:16:36,217 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4154 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:41955 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., hostname=jenkins-hbase4.apache.org,41955,1689261371593, seqNum=13, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:41955 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:41955 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:36,247 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:36,247 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:36,248 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:36,257 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:36,257 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:36,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:36,257 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:36,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:36,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,258 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36704, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:36,259 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,260 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:36,261 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:36,261 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f9b3c3c0c701a7e057738cfe2a31027 columnFamilyName info 2023-07-13 15:16:36,262 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open 
hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:36,262 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. service=MultiRowMutationService 2023-07-13 15:16:36,263 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,268 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,269 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:36,269 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:36,270 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 111352044b1bd403da18db964c499c82 columnFamilyName m 2023-07-13 15:16:36,272 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:36,273 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:36,282 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:36,282 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(310): Store=8f9b3c3c0c701a7e057738cfe2a31027/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:36,283 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,285 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,287 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:36,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:36,289 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f9b3c3c0c701a7e057738cfe2a31027; next sequenceid=21; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10741223840, jitterRate=3.544241189956665E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:36,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:36,292 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., pid=123, masterSystemTime=1689261396247 2023-07-13 15:16:36,295 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:36,295 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:36,295 INFO 
[StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(310): Store=111352044b1bd403da18db964c499c82/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:36,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:36,296 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:36,296 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, openSeqNum=21, regionLocation=jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:36,297 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261396296"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261396296"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261396296"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261396296"}]},"ts":"1689261396296"} 2023-07-13 15:16:36,297 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,299 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,302 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:36,302 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=123, resume processing ppid=121 2023-07-13 15:16:36,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, ppid=121, state=SUCCESS; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,38071,1689261390766 in 205 msec 2023-07-13 15:16:36,304 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 111352044b1bd403da18db964c499c82; next sequenceid=77; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@78d1813c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:36,304 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:36,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=116 2023-07-13 15:16:36,304 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,36737,1689261368119 after splitting done 2023-07-13 15:16:36,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=116, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN in 225 msec 
2023-07-13 15:16:36,304 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,36737,1689261368119 from processing; numProcessing=1 2023-07-13 15:16:36,306 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true in 4.8960 sec 2023-07-13 15:16:36,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., pid=124, masterSystemTime=1689261396257 2023-07-13 15:16:36,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:36,310 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:36,310 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=122 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPEN, openSeqNum=77, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:36,311 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261396310"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261396310"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261396310"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261396310"}]},"ts":"1689261396310"} 2023-07-13 15:16:36,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=122 2023-07-13 15:16:36,315 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=122, state=SUCCESS; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,43473,1689261390950 in 209 msec 2023-07-13 15:16:36,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=115 2023-07-13 15:16:36,317 INFO [PEWorker-1] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,41955,1689261371593 after splitting done 2023-07-13 15:16:36,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=115, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN in 232 msec 2023-07-13 15:16:36,317 DEBUG [PEWorker-1] master.DeadServer(114): Removed jenkins-hbase4.apache.org,41955,1689261371593 from processing; numProcessing=0 2023-07-13 15:16:36,318 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41955,1689261371593, splitWal=true, meta=false in 4.9150 sec 2023-07-13 15:16:37,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-13 15:16:37,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:37,063 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:34246, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:37,076 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:37,080 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:37,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.913sec 2023-07-13 15:16:37,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-13 15:16:37,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:37,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=125, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-13 15:16:37,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-13 15:16:37,083 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 15:16:37,084 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 15:16:37,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-13 15:16:37,086 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,086 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 empty. 2023-07-13 15:16:37,087 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,087 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-13 15:16:37,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 
2023-07-13 15:16:37,089 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-13 15:16:37,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:37,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:37,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:37,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:37,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46509,1689261390391-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:37,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46509,1689261390391-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-13 15:16:37,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:37,100 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-13 15:16:37,101 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6f63fe7474be7b61966d8c0a666e0157, NAME => 'hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.tmp 2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 6f63fe7474be7b61966d8c0a666e0157, disabling compactions & flushes 2023-07-13 15:16:37,111 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 
2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. after waiting 0 ms 2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:37,111 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:37,111 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:37,114 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 15:16:37,115 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261397115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261397115"}]},"ts":"1689261397115"} 2023-07-13 15:16:37,116 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 15:16:37,117 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 15:16:37,117 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261397117"}]},"ts":"1689261397117"} 2023-07-13 15:16:37,119 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-13 15:16:37,124 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:37,124 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:37,124 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:37,124 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:37,124 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:37,124 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN}] 2023-07-13 15:16:37,126 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN 2023-07-13 15:16:37,127 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, ppid=125, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43473,1689261390950; forceNewPlan=false, retain=false 2023-07-13 15:16:37,149 DEBUG [Listener at 
localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x480057b7 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:37,154 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a5dc603, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:37,156 DEBUG [hconnection-0x4d6821d7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:37,157 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51366, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:37,164 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-13 15:16:37,165 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x480057b7 to 127.0.0.1:56695 2023-07-13 15:16:37,165 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:37,166 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:46509 after: jenkins-hbase4.apache.org:46509 2023-07-13 15:16:37,166 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x1217676f to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:37,172 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e7d8a63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:37,172 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:37,277 INFO [jenkins-hbase4:46509] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 15:16:37,278 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:37,279 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261397278"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261397278"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261397278"}]},"ts":"1689261397278"} 2023-07-13 15:16:37,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,43473,1689261390950}] 2023-07-13 15:16:37,360 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:37,436 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 
2023-07-13 15:16:37,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f63fe7474be7b61966d8c0a666e0157, NAME => 'hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:37,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:37,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,436 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,438 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,439 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:37,440 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:37,440 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f63fe7474be7b61966d8c0a666e0157 columnFamilyName q 2023-07-13 15:16:37,441 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:37,441 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,442 DEBUG 
[StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:37,442 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:37,443 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f63fe7474be7b61966d8c0a666e0157 columnFamilyName u 2023-07-13 15:16:37,443 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:37,444 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,445 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,447 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-13 15:16:37,448 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:37,451 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 15:16:37,452 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6f63fe7474be7b61966d8c0a666e0157; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10067486880, jitterRate=-0.0623922199010849}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 15:16:37,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:37,453 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157., pid=127, masterSystemTime=1689261397432 2023-07-13 15:16:37,454 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:37,454 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:37,455 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:37,455 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261397454"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261397454"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261397454"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261397454"}]},"ts":"1689261397454"} 2023-07-13 15:16:37,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-13 15:16:37,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,43473,1689261390950 in 176 msec 2023-07-13 15:16:37,459 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=126, resume processing ppid=125 2023-07-13 15:16:37,459 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=126, ppid=125, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN in 333 msec 2023-07-13 15:16:37,459 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 15:16:37,460 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689261397460"}]},"ts":"1689261397460"} 2023-07-13 15:16:37,461 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-13 15:16:37,463 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=125, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 15:16:37,464 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, state=SUCCESS; CreateTableProcedure table=hbase:quota in 382 msec 2023-07-13 15:16:37,567 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:37,568 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:37,568 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-13 15:16:37,586 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:40,226 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:40,228 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36710, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:40,228 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:40,228 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-13 15:16:40,238 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:40,239 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:40,239 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:40,241 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-13 15:16:40,241 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46509,1689261390391] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:40,275 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:40,277 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37904, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:40,279 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:40,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46509] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:40,280 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x35dd3ab6 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:40,289 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44207ae0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:40,289 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:40,290 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [90,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:40,292 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:40,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f415947001b connected 2023-07-13 15:16:40,293 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:40,294 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51380, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-13 15:16:40,302 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(309): Shutting down cluster 2023-07-13 15:16:40,302 INFO [Listener at localhost/35161] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1217676f to 127.0.0.1:56695 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(257): Found active master hash=1946064107, stopped=false 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:40,302 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-13 15:16:40,302 INFO [Listener at localhost/35161] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:40,304 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:40,304 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:40,304 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:40,304 INFO [Listener at localhost/35161] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:40,304 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:40,304 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:40,305 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:40,306 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:40,306 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 
15:16:40,306 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c2ecdb2 to 127.0.0.1:56695 2023-07-13 15:16:40,306 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,306 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33739,1689261390578' ***** 2023-07-13 15:16:40,306 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:40,307 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38071,1689261390766' ***** 2023-07-13 15:16:40,307 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:40,307 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:40,307 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43473,1689261390950' ***** 2023-07-13 15:16:40,307 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:40,308 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:40,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:40,315 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:40,320 INFO [RS:0;jenkins-hbase4:33739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6b9ffb31{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:40,320 INFO [RS:0;jenkins-hbase4:33739] server.AbstractConnector(383): Stopped ServerConnector@2e30a10a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:40,320 INFO [RS:0;jenkins-hbase4:33739] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:40,321 INFO [RS:0;jenkins-hbase4:33739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@267fa735{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:40,321 INFO [RS:0;jenkins-hbase4:33739] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31d1104{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:40,321 INFO [RS:0;jenkins-hbase4:33739] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:40,321 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:40,321 INFO [RS:0;jenkins-hbase4:33739] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:40,322 DEBUG [RS:0;jenkins-hbase4:33739] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4426426e to 127.0.0.1:56695 2023-07-13 15:16:40,322 DEBUG [RS:0;jenkins-hbase4:33739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:40,322 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:40,322 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-13 15:16:40,322 DEBUG [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-13 15:16:40,326 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:40,328 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:40,328 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:40,329 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:40,329 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:40,329 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.05 KB heapSize=5.87 KB 2023-07-13 15:16:40,330 INFO [RS:2;jenkins-hbase4:43473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@61080f19{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:40,330 INFO [RS:1;jenkins-hbase4:38071] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1784eed4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:40,330 INFO [RS:2;jenkins-hbase4:43473] server.AbstractConnector(383): Stopped ServerConnector@68a6e4a6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:40,331 INFO [RS:1;jenkins-hbase4:38071] server.AbstractConnector(383): Stopped ServerConnector@59ce34f3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:40,331 INFO [RS:2;jenkins-hbase4:43473] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:40,331 INFO [RS:1;jenkins-hbase4:38071] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:40,331 INFO [RS:2;jenkins-hbase4:43473] handler.ContextHandler(1159): 
Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6343c989{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:40,331 INFO [RS:1;jenkins-hbase4:38071] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@bad01f9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:40,331 INFO [RS:2;jenkins-hbase4:43473] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15f914be{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:40,331 INFO [RS:1;jenkins-hbase4:38071] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28eeeebb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:40,332 INFO [RS:2;jenkins-hbase4:43473] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:40,332 INFO [RS:1;jenkins-hbase4:38071] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:40,332 INFO [RS:2;jenkins-hbase4:43473] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:40,332 INFO [RS:1;jenkins-hbase4:38071] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:40,332 INFO [RS:1;jenkins-hbase4:38071] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:40,332 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:40,332 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(3305): Received CLOSE for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:40,333 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,333 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:40,334 DEBUG [RS:1;jenkins-hbase4:38071] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f40121a to 127.0.0.1:56695 2023-07-13 15:16:40,334 DEBUG [RS:1;jenkins-hbase4:38071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,334 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 15:16:40,332 INFO [RS:2;jenkins-hbase4:43473] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 15:16:40,332 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:40,334 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(3305): Received CLOSE for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:40,334 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1478): Online Regions={8f9b3c3c0c701a7e057738cfe2a31027=hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.} 2023-07-13 15:16:40,335 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:40,335 DEBUG [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1504): Waiting on 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:40,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:40,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:40,336 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,340 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(3305): Received CLOSE for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:40,340 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:40,340 DEBUG [RS:2;jenkins-hbase4:43473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1fdf75f2 to 127.0.0.1:56695 2023-07-13 15:16:40,340 DEBUG [RS:2;jenkins-hbase4:43473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,340 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 15:16:40,340 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1478): Online Regions={111352044b1bd403da18db964c499c82=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., 6f63fe7474be7b61966d8c0a666e0157=hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.} 2023-07-13 15:16:40,340 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1504): Waiting on 111352044b1bd403da18db964c499c82, 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:40,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:40,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:40,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:40,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:40,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:40,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 111352044b1bd403da18db964c499c82 1/1 column families, dataSize=242 B heapSize=648 B 2023-07-13 15:16:40,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/23.seqid, newMaxSeqId=23, maxSeqId=20 2023-07-13 15:16:40,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:40,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:40,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:40,373 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.97 KB at sequenceid=163 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/e331f406b6624624a6f0dd5ce8e3b5ca 2023-07-13 15:16:40,405 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=86 B at sequenceid=163 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/fac7b86466ef4efabec576fae39af302 2023-07-13 15:16:40,411 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/e331f406b6624624a6f0dd5ce8e3b5ca as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca 2023-07-13 15:16:40,417 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca, entries=26, sequenceid=163, filesize=7.7 K 2023-07-13 15:16:40,418 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/fac7b86466ef4efabec576fae39af302 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302 2023-07-13 15:16:40,424 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302, entries=2, sequenceid=163, filesize=4.7 K 2023-07-13 15:16:40,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.05 KB/3126, heapSize ~5.59 KB/5720, currentSize=0 B/0 for 1588230740 in 97ms, sequenceid=163, compaction requested=true 2023-07-13 15:16:40,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/166.seqid, newMaxSeqId=166, maxSeqId=151 2023-07-13 15:16:40,454 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:40,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:40,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:40,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:40,523 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33739,1689261390578; all regions closed. 2023-07-13 15:16:40,523 DEBUG [RS:0;jenkins-hbase4:33739] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 15:16:40,530 DEBUG [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:40,530 INFO [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33739%2C1689261390578.meta:.meta(num 1689261391841) 2023-07-13 15:16:40,535 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38071,1689261390766; all regions closed. 2023-07-13 15:16:40,535 DEBUG [RS:1;jenkins-hbase4:38071] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 15:16:40,541 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1504): Waiting on 111352044b1bd403da18db964c499c82, 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:40,543 DEBUG [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:40,543 INFO [RS:0;jenkins-hbase4:33739] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33739%2C1689261390578:(num 1689261391768) 2023-07-13 15:16:40,543 DEBUG [RS:0;jenkins-hbase4:33739] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,543 INFO [RS:0;jenkins-hbase4:33739] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,544 INFO [RS:0;jenkins-hbase4:33739] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:40,544 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 15:16:40,545 DEBUG [RS:1;jenkins-hbase4:38071] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38071%2C1689261390766:(num 1689261391768) 2023-07-13 15:16:40,545 DEBUG [RS:1;jenkins-hbase4:38071] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:40,545 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:40,545 INFO [RS:1;jenkins-hbase4:38071] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:40,547 INFO [RS:0;jenkins-hbase4:33739] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33739 2023-07-13 15:16:40,547 INFO [RS:1;jenkins-hbase4:38071] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38071 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38071,1689261390766 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:40,551 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:40,552 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:40,552 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:40,552 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33739,1689261390578 2023-07-13 15:16:40,553 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33739,1689261390578] 2023-07-13 15:16:40,553 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33739,1689261390578; numProcessing=1 2023-07-13 15:16:40,559 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33739,1689261390578 already deleted, retry=false 2023-07-13 15:16:40,559 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33739,1689261390578 expired; onlineServers=2 2023-07-13 15:16:40,559 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38071,1689261390766] 2023-07-13 15:16:40,559 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38071,1689261390766; numProcessing=2 2023-07-13 15:16:40,653 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:40,653 INFO [RS:1;jenkins-hbase4:38071] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38071,1689261390766; zookeeper connection closed. 
2023-07-13 15:16:40,654 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:38071-0x1015f4159470012, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:40,654 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@108b70f0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@108b70f0 2023-07-13 15:16:40,655 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38071,1689261390766 already deleted, retry=false 2023-07-13 15:16:40,655 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38071,1689261390766 expired; onlineServers=1 2023-07-13 15:16:40,719 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-13 15:16:40,719 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-13 15:16:40,741 DEBUG [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1504): Waiting on 111352044b1bd403da18db964c499c82, 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:40,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=242 B at sequenceid=80 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/b38b6008019f46ea832797c52903ef60 2023-07-13 15:16:40,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/b38b6008019f46ea832797c52903ef60 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60 2023-07-13 15:16:40,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60, entries=2, sequenceid=80, filesize=5.0 K 2023-07-13 15:16:40,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~242 B/242, heapSize ~632 B/632, currentSize=0 B/0 for 111352044b1bd403da18db964c499c82 in 463ms, sequenceid=80, compaction requested=true 2023-07-13 15:16:40,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/83.seqid, newMaxSeqId=83, maxSeqId=76 2023-07-13 15:16:40,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:40,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:40,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:40,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:40,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6f63fe7474be7b61966d8c0a666e0157, disabling compactions & flushes 2023-07-13 15:16:40,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:40,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:40,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. after waiting 0 ms 2023-07-13 15:16:40,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:40,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 15:16:40,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:40,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:40,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:40,905 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:40,905 INFO [RS:0;jenkins-hbase4:33739] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33739,1689261390578; zookeeper connection closed. 2023-07-13 15:16:40,905 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33739-0x1015f4159470011, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:40,905 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15205b7b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15205b7b 2023-07-13 15:16:40,941 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43473,1689261390950; all regions closed. 2023-07-13 15:16:40,941 DEBUG [RS:2;jenkins-hbase4:43473] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-13 15:16:40,969 DEBUG [RS:2;jenkins-hbase4:43473] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:40,969 INFO [RS:2;jenkins-hbase4:43473] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43473%2C1689261390950:(num 1689261391768) 2023-07-13 15:16:40,970 DEBUG [RS:2;jenkins-hbase4:43473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,970 INFO [RS:2;jenkins-hbase4:43473] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:40,970 INFO [RS:2;jenkins-hbase4:43473] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:40,970 INFO [RS:2;jenkins-hbase4:43473] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:40,970 INFO [RS:2;jenkins-hbase4:43473] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:40,970 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:40,970 INFO [RS:2;jenkins-hbase4:43473] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:40,975 INFO [RS:2;jenkins-hbase4:43473] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43473 2023-07-13 15:16:40,977 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:40,977 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43473,1689261390950 2023-07-13 15:16:40,977 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43473,1689261390950] 2023-07-13 15:16:40,977 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43473,1689261390950; numProcessing=3 2023-07-13 15:16:40,982 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43473,1689261390950 already deleted, retry=false 2023-07-13 15:16:40,982 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43473,1689261390950 expired; onlineServers=0 2023-07-13 15:16:40,982 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46509,1689261390391' ***** 2023-07-13 15:16:40,982 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:40,982 DEBUG [M:0;jenkins-hbase4:46509] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77c11e4c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:40,983 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegionServer(1109): Stopping infoServer 
2023-07-13 15:16:40,985 INFO [M:0;jenkins-hbase4:46509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@77a33bbe{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:40,985 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:40,985 INFO [M:0;jenkins-hbase4:46509] server.AbstractConnector(383): Stopped ServerConnector@50727a01{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:40,985 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:40,985 INFO [M:0;jenkins-hbase4:46509] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:40,985 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:40,985 INFO [M:0;jenkins-hbase4:46509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40d99cb9{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:40,986 INFO [M:0;jenkins-hbase4:46509] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@615705c2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:40,986 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46509,1689261390391 2023-07-13 15:16:40,986 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46509,1689261390391; all regions closed. 2023-07-13 15:16:40,986 DEBUG [M:0;jenkins-hbase4:46509] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:40,986 INFO [M:0;jenkins-hbase4:46509] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:40,987 INFO [M:0;jenkins-hbase4:46509] server.AbstractConnector(383): Stopped ServerConnector@1304bd11{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:40,987 DEBUG [M:0;jenkins-hbase4:46509] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:40,987 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-13 15:16:40,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261391489] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261391489,5,FailOnTimeoutGroup] 2023-07-13 15:16:40,987 DEBUG [M:0;jenkins-hbase4:46509] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:40,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261391490] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261391490,5,FailOnTimeoutGroup] 2023-07-13 15:16:40,988 INFO [M:0;jenkins-hbase4:46509] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:40,989 INFO [M:0;jenkins-hbase4:46509] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 15:16:40,989 INFO [M:0;jenkins-hbase4:46509] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:40,989 DEBUG [M:0;jenkins-hbase4:46509] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:40,989 INFO [M:0;jenkins-hbase4:46509] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:40,989 ERROR [M:0;jenkins-hbase4:46509] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 15:16:40,990 INFO [M:0;jenkins-hbase4:46509] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:40,990 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 15:16:40,990 DEBUG [M:0;jenkins-hbase4:46509] zookeeper.ZKUtil(398): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:40,990 WARN [M:0;jenkins-hbase4:46509] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:40,990 INFO [M:0;jenkins-hbase4:46509] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:40,991 INFO [M:0;jenkins-hbase4:46509] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:40,991 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:40,991 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:40,991 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:40,991 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:40,991 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 15:16:40,991 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=45.28 KB heapSize=54.86 KB 2023-07-13 15:16:41,006 INFO [M:0;jenkins-hbase4:46509] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=45.28 KB at sequenceid=958 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f46b121caa0d47549a35ffac1c6907df 2023-07-13 15:16:41,013 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/f46b121caa0d47549a35ffac1c6907df as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f46b121caa0d47549a35ffac1c6907df 2023-07-13 15:16:41,020 INFO [M:0;jenkins-hbase4:46509] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f46b121caa0d47549a35ffac1c6907df, entries=13, sequenceid=958, filesize=7.2 K 2023-07-13 15:16:41,020 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegion(2948): Finished flush of dataSize ~45.28 KB/46367, heapSize ~54.84 KB/56160, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=958, compaction requested=false 2023-07-13 15:16:41,022 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:41,022 DEBUG [M:0;jenkins-hbase4:46509] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:41,030 INFO [M:0;jenkins-hbase4:46509] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:41,030 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:41,031 INFO [M:0;jenkins-hbase4:46509] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46509 2023-07-13 15:16:41,032 DEBUG [M:0;jenkins-hbase4:46509] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46509,1689261390391 already deleted, retry=false 2023-07-13 15:16:41,078 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:41,078 INFO [RS:2;jenkins-hbase4:43473] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43473,1689261390950; zookeeper connection closed. 
2023-07-13 15:16:41,079 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:43473-0x1015f4159470013, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:41,079 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@735dd6e0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@735dd6e0 2023-07-13 15:16:41,079 INFO [Listener at localhost/35161] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-13 15:16:41,179 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:41,179 INFO [M:0;jenkins-hbase4:46509] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46509,1689261390391; zookeeper connection closed. 2023-07-13 15:16:41,179 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46509-0x1015f4159470010, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:41,180 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(311): Sleeping a bit 2023-07-13 15:16:42,940 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:43,181 INFO [Listener at localhost/35161] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:43,182 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:43,183 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46327 2023-07-13 15:16:43,184 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,185 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to 
namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,186 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46327 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:43,189 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:463270x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:43,190 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46327-0x1015f415947001c connected 2023-07-13 15:16:43,193 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:43,193 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:43,194 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:43,194 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-13 15:16:43,194 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46327 2023-07-13 15:16:43,194 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46327 2023-07-13 15:16:43,198 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-13 15:16:43,199 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46327 2023-07-13 15:16:43,200 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:43,200 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:43,200 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:43,201 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 15:16:43,201 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:43,201 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:43,201 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:43,202 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 46133 2023-07-13 15:16:43,202 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:43,203 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,203 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@279c5ff3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:43,203 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,203 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31f6b9dd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:43,316 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:43,317 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:43,317 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:43,317 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:43,318 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,319 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3131b3ff{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-46133-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8023910684444233509/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:43,320 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@3a309945{HTTP/1.1, (http/1.1)}{0.0.0.0:46133} 2023-07-13 15:16:43,321 INFO [Listener at localhost/35161] server.Server(415): Started @43320ms 2023-07-13 15:16:43,321 INFO [Listener at localhost/35161] master.HMaster(444): hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046, hbase.cluster.distributed=false 2023-07-13 15:16:43,322 DEBUG [pool-523-thread-1] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: INIT 2023-07-13 15:16:43,333 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated 
priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:43,334 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:43,335 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40629 2023-07-13 15:16:43,335 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:43,336 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:43,337 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,338 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,339 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40629 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:43,343 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:406290x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:43,344 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:406290x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:43,344 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40629-0x1015f415947001d connected 2023-07-13 15:16:43,344 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:43,345 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:43,345 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40629 2023-07-13 15:16:43,345 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40629 2023-07-13 15:16:43,346 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, 
numCallQueues=1, port=40629 2023-07-13 15:16:43,346 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40629 2023-07-13 15:16:43,346 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40629 2023-07-13 15:16:43,348 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:43,348 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:43,348 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:43,349 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:43,349 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:43,349 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:43,350 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:43,350 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 42333 2023-07-13 15:16:43,350 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:43,353 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,353 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c837354{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:43,354 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,354 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c13ea68{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:43,482 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:43,483 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:43,483 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:43,483 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 
15:16:43,484 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,485 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@727136fa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-42333-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2076401143008042855/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:43,488 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@6139a553{HTTP/1.1, (http/1.1)}{0.0.0.0:42333} 2023-07-13 15:16:43,488 INFO [Listener at localhost/35161] server.Server(415): Started @43488ms 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:43,501 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:43,502 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35181 2023-07-13 15:16:43,503 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:43,504 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:43,505 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,506 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,508 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35181 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:43,512 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): 
regionserver:351810x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:43,513 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:351810x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:43,514 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35181-0x1015f415947001e connected 2023-07-13 15:16:43,514 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:43,515 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:43,523 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35181 2023-07-13 15:16:43,523 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35181 2023-07-13 15:16:43,523 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35181 2023-07-13 15:16:43,524 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35181 2023-07-13 15:16:43,524 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35181 2023-07-13 15:16:43,526 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:43,526 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:43,526 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:43,527 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:43,527 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:43,527 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:43,527 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 15:16:43,528 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 34989 2023-07-13 15:16:43,528 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:43,535 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,535 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@307021bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:43,536 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,536 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f0f0e67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:43,658 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:43,659 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:43,659 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:43,660 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 15:16:43,660 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,661 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1a339e43{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-34989-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1448261271516095395/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:43,663 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@4cb46a72{HTTP/1.1, (http/1.1)}{0.0.0.0:34989} 2023-07-13 15:16:43,663 INFO [Listener at localhost/35161] server.Server(415): Started @43663ms 2023-07-13 15:16:43,676 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:43,676 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,677 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,677 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:43,677 INFO 
[Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:43,677 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:43,677 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:43,678 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35843 2023-07-13 15:16:43,679 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:43,682 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:43,682 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,684 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,686 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35843 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:43,691 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:358430x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:43,692 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:358430x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:43,693 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35843-0x1015f415947001f connected 2023-07-13 15:16:43,693 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:43,693 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:43,694 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35843 2023-07-13 15:16:43,694 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35843 2023-07-13 15:16:43,694 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35843 2023-07-13 15:16:43,697 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35843 2023-07-13 15:16:43,697 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=35843 2023-07-13 15:16:43,699 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:43,699 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:43,699 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:43,700 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:43,700 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:43,700 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:43,700 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:43,700 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 41099 2023-07-13 15:16:43,701 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:43,703 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,703 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39f0e8d4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:43,704 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,704 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42c1a7eb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:43,831 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:43,832 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:43,832 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:43,832 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:43,833 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:43,833 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3e9b0004{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-41099-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6891973958774374299/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:43,835 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@4b5beed5{HTTP/1.1, (http/1.1)}{0.0.0.0:41099} 2023-07-13 15:16:43,835 INFO [Listener at localhost/35161] server.Server(415): Started @43835ms 2023-07-13 15:16:43,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:43,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@38cd7617{HTTP/1.1, (http/1.1)}{0.0.0.0:34441} 2023-07-13 15:16:43,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43842ms 2023-07-13 15:16:43,842 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:43,843 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:43,844 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:43,845 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:43,845 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:43,845 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:43,846 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:43,847 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:43,847 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:43,849 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46327,1689261403181 from backup master directory 2023-07-13 15:16:43,850 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:43,852 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:43,852 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:43,852 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 15:16:43,852 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:43,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:43,902 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e3dd715 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:43,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@79bce975, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:43,907 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 15:16:43,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 15:16:43,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:43,943 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(288): Renamed hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391-dead as it is dead 2023-07-13 15:16:43,944 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(86): Recover lease on dfs file 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391-dead/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:43,945 INFO [master/jenkins-hbase4:0:becomeActiveMaster] util.RecoverLeaseFSUtils(175): Recovered lease, attempt=0 on file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391-dead/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 after 1ms 2023-07-13 15:16:43,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(300): Renamed hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391-dead/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:43,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(302): Delete empty local region wal dir hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46509,1689261390391-dead 2023-07-13 15:16:43,946 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:43,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46327%2C1689261403181, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46327,1689261403181, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/oldWALs, maxLogs=10 2023-07-13 15:16:43,962 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:43,962 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:43,962 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:43,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/WALs/jenkins-hbase4.apache.org,46327,1689261403181/jenkins-hbase4.apache.org%2C46327%2C1689261403181.1689261403949 2023-07-13 15:16:43,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:43,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:43,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:43,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:43,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:43,971 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:43,971 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 15:16:43,972 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 15:16:43,978 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a726e1ea1b1541a09f18088d5513b202 2023-07-13 15:16:43,982 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/f46b121caa0d47549a35ffac1c6907df 2023-07-13 15:16:43,982 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:43,983 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5179): 
Found 1 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals 2023-07-13 15:16:43,983 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5276): Replaying edits from hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:43,988 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5464): Applied 0, skipped 128, firstSequenceIdInLog=848, maxSequenceIdInLog=960, path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:43,989 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5086): Deleted recovered.edits file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.wals/jenkins-hbase4.apache.org%2C46509%2C1689261390391.1689261391255 2023-07-13 15:16:43,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 15:16:43,995 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/960.seqid, newMaxSeqId=960, maxSeqId=846 2023-07-13 15:16:43,995 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=961; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9817864480, jitterRate=-0.08564011752605438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:43,996 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:43,996 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 15:16:43,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 15:16:43,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 15:16:43,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 
2023-07-13 15:16:43,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 15:16:44,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta 2023-07-13 15:16:44,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace 2023-07-13 15:16:44,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup 2023-07-13 15:16:44,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default 2023-07-13 15:16:44,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase 2023-07-13 15:16:44,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:44,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=15, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33725,1689261367727, splitWal=true, meta=false 2023-07-13 15:16:44,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=16, state=SUCCESS; ModifyNamespaceProcedure, namespace=default 2023-07-13 15:16:44,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=17, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:44,026 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=20, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=23, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndAssign 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=24, state=SUCCESS; CreateTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=45, state=SUCCESS; DisableTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=66, state=SUCCESS; DeleteTableProcedure table=Group_testCreateMultiRegion 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=67, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:44,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=68, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=73, state=SUCCESS; 
CreateNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=74, state=SUCCESS; CreateTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=77, state=SUCCESS; DisableTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=80, state=SUCCESS; DeleteTableProcedure table=Group_foo:Group_testCreateAndAssign 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=81, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=82, state=SUCCESS; CreateTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=85, state=SUCCESS; DisableTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:44,028 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=88, state=SUCCESS; DeleteTableProcedure table=Group_testCreateAndDrop 2023-07-13 15:16:44,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=89, state=SUCCESS; CreateTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:44,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=92, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=EXCLUSIVE 2023-07-13 15:16:44,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=93, state=SUCCESS; org.apache.hadoop.hbase.master.locking.LockProcedure, tableName=Group_testCloneSnapshot, type=SHARED 2023-07-13 15:16:44,029 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=94, state=SUCCESS; CloneSnapshotProcedure (table=Group_testCloneSnapshot_clone snapshot=name: "Group_testCloneSnapshot_snap" table: "Group_testCloneSnapshot" creation_time: 1689261383767 type: FLUSH version: 2 ttl: 0 ) 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=97, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=100, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=101, state=SUCCESS; DisableTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=104, state=SUCCESS; DeleteTableProcedure table=Group_testCloneSnapshot_clone 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=105, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:44,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(411): Completed pid=106, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.HBaseIOException via master-create-table:org.apache.hadoop.hbase.HBaseIOException: No online servers in the rsgroup appInfo which table Group_ns:testCreateWhenRsgroupNoOnlineServers belongs to; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=107, state=SUCCESS; CreateTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=110, state=SUCCESS; DisableTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=113, state=SUCCESS; DeleteTableProcedure table=Group_ns:testCreateWhenRsgroupNoOnlineServers 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=114, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_ns 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=115, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,41955,1689261371593, splitWal=true, meta=false 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=116, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,36737,1689261368119, splitWal=true, meta=true 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=117, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,34275,1689261367926, splitWal=true, meta=false 2023-07-13 15:16:44,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=118, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43693,1689261373307, splitWal=true, meta=false 2023-07-13 15:16:44,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(411): Completed pid=125, state=SUCCESS; CreateTableProcedure table=hbase:quota 2023-07-13 15:16:44,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 34 msec 2023-07-13 15:16:44,032 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 15:16:44,033 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [meta-region-server] 2023-07-13 15:16:44,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(272): Loaded hbase:meta state=OPEN, location=jenkins-hbase4.apache.org,33739,1689261390578, table=hbase:meta, region=1588230740 2023-07-13 15:16:44,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 3 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 15:16:44,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33739,1689261390578 already deleted, retry=false 2023-07-13 15:16:44,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,33739,1689261390578 on jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:44,038 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=128, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,33739,1689261390578, splitWal=true, meta=true 2023-07-13 15:16:44,038 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=128 for jenkins-hbase4.apache.org,33739,1689261390578 (carryingMeta=true) jenkins-hbase4.apache.org,33739,1689261390578/CRASHED/regionCount=1/lock=java.util.concurrent.locks.ReentrantReadWriteLock@45868641[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:44,040 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43473,1689261390950 already deleted, retry=false 2023-07-13 15:16:44,040 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,43473,1689261390950 on jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:44,041 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,43473,1689261390950, splitWal=true, meta=false 2023-07-13 15:16:44,041 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=129 for jenkins-hbase4.apache.org,43473,1689261390950 (carryingMeta=false) jenkins-hbase4.apache.org,43473,1689261390950/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@4c4fdc70[Write locks = 1, Read locks = 0], oldState=ONLINE. 2023-07-13 15:16:44,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38071,1689261390766 already deleted, retry=false 2023-07-13 15:16:44,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,38071,1689261390766 on jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:44,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=130, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,38071,1689261390766, splitWal=true, meta=false 2023-07-13 15:16:44,043 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=130 for jenkins-hbase4.apache.org,38071,1689261390766 (carryingMeta=false) jenkins-hbase4.apache.org,38071,1689261390766/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@6fbb7200[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-13 15:16:44,044 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/balancer 2023-07-13 15:16:44,044 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 15:16:44,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 15:16:44,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 15:16:44,049 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 15:16:44,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 15:16:44,052 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:44,052 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:44,052 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:44,052 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:44,052 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:44,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46327,1689261403181, sessionid=0x1015f415947001c, setting cluster-up flag (Was=false) 2023-07-13 15:16:44,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 15:16:44,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:44,063 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 15:16:44,064 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:44,065 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/.hbase-snapshot/.tmp 2023-07-13 15:16:44,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 15:16:44,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 15:16:44,071 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(511): Read ZK GroupInfo count:2 2023-07-13 15:16:44,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 15:16:44,074 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:44,074 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 15:16:44,080 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:44,081 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:33739 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:33739 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:44,083 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:33739 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:33739 2023-07-13 15:16:44,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:44,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 15:16:44,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 15:16:44,095 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:44,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689261434108 2023-07-13 15:16:44,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 15:16:44,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 15:16:44,109 DEBUG [PEWorker-2] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43473,1689261390950; numProcessing=1 2023-07-13 15:16:44,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 15:16:44,109 INFO [PEWorker-2] procedure.ServerCrashProcedure(161): Start pid=129, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43473,1689261390950, splitWal=true, meta=false 2023-07-13 15:16:44,109 DEBUG [PEWorker-1] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33739,1689261390578; numProcessing=2 2023-07-13 15:16:44,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 15:16:44,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 15:16:44,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 15:16:44,109 INFO [PEWorker-1] 
procedure.ServerCrashProcedure(161): Start pid=128, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33739,1689261390578, splitWal=true, meta=true 2023-07-13 15:16:44,109 DEBUG [PEWorker-3] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38071,1689261390766; numProcessing=3 2023-07-13 15:16:44,110 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=130, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,38071,1689261390766, splitWal=true, meta=false 2023-07-13 15:16:44,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 15:16:44,111 INFO [PEWorker-1] procedure.ServerCrashProcedure(300): Splitting WALs pid=128, state=RUNNABLE:SERVER_CRASH_SPLIT_META_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33739,1689261390578, splitWal=true, meta=true, isMeta: true 2023-07-13 15:16:44,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 15:16:44,112 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 15:16:44,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 15:16:44,119 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 15:16:44,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261404119,5,FailOnTimeoutGroup] 2023-07-13 15:16:44,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261404125,5,FailOnTimeoutGroup] 2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689261404125, completionTime=-1 2023-07-13 15:16:44,125 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(766): The value of 'hbase.master.wait.on.regionservers.maxtostart' (-1) is set less than 'hbase.master.wait.on.regionservers.mintostart' (1), ignoring. 2023-07-13 15:16:44,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=0; waited=0ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-13 15:16:44,126 DEBUG [PEWorker-1] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578-splitting 2023-07-13 15:16:44,127 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578-splitting dir is empty, no logs to split. 2023-07-13 15:16:44,127 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,33739,1689261390578 WAL count=0, meta=true 2023-07-13 15:16:44,129 INFO [PEWorker-1] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578-splitting dir is empty, no logs to split. 2023-07-13 15:16:44,129 INFO [PEWorker-1] master.SplitWALManager(106): jenkins-hbase4.apache.org,33739,1689261390578 WAL count=0, meta=true 2023-07-13 15:16:44,129 DEBUG [PEWorker-1] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,33739,1689261390578 WAL splitting is done? 
wals=0, meta=true 2023-07-13 15:16:44,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 15:16:44,131 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 15:16:44,132 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=128, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:44,137 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:44,137 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:44,138 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:44,138 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:44,138 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:44,138 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:44,140 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:44,140 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:44,140 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:44,140 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:44,141 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:44,140 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:44,143 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:44,144 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:44,145 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:44,146 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ReadOnlyZKClient(139): Connect 0x38b5f8c1 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:44,146 DEBUG [RS:2;jenkins-hbase4:35843] zookeeper.ReadOnlyZKClient(139): Connect 0x313585c4 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:44,146 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ReadOnlyZKClient(139): Connect 0x7de674e9 to 
127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:44,189 DEBUG [RS:0;jenkins-hbase4:40629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@367211a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:44,189 DEBUG [RS:0;jenkins-hbase4:40629] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@324d93b1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:44,198 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40629 2023-07-13 15:16:44,198 INFO [RS:0;jenkins-hbase4:40629] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:44,198 INFO [RS:0;jenkins-hbase4:40629] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:44,198 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:44,199 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46327,1689261403181 with isa=jenkins-hbase4.apache.org/172.31.14.131:40629, startcode=1689261403333 2023-07-13 15:16:44,199 DEBUG [RS:0;jenkins-hbase4:40629] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:44,199 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:33739 this server is in the failed servers list 2023-07-13 15:16:44,203 DEBUG [RS:2;jenkins-hbase4:35843] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ff26d92, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:44,203 DEBUG [RS:1;jenkins-hbase4:35181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@73219525, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:44,203 DEBUG [RS:2;jenkins-hbase4:35843] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3bcdeeb4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:44,203 DEBUG [RS:1;jenkins-hbase4:35181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@771e1f99, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:44,212 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.ShutdownHook(81): Installed 
shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35843 2023-07-13 15:16:44,212 INFO [RS:2;jenkins-hbase4:35843] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:44,212 INFO [RS:2;jenkins-hbase4:35843] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:44,212 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:44,212 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35181 2023-07-13 15:16:44,212 INFO [RS:1;jenkins-hbase4:35181] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:44,212 INFO [RS:1;jenkins-hbase4:35181] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:44,212 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:44,212 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46327,1689261403181 with isa=jenkins-hbase4.apache.org/172.31.14.131:35843, startcode=1689261403676 2023-07-13 15:16:44,212 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46327,1689261403181 with isa=jenkins-hbase4.apache.org/172.31.14.131:35181, startcode=1689261403500 2023-07-13 15:16:44,213 DEBUG [RS:2;jenkins-hbase4:35843] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:44,213 DEBUG [RS:1;jenkins-hbase4:35181] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:44,213 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33795, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:44,217 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,218 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:44,218 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:44,219 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41857, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:44,219 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34127, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:44,219 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,219 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:44,219 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:44,220 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:44,220 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 15:16:44,220 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46133 2023-07-13 15:16:44,220 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,220 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:44,220 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 15:16:44,220 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:44,220 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:44,220 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46133 2023-07-13 15:16:44,222 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:44,223 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,223 WARN [RS:0;jenkins-hbase4:40629] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:44,223 INFO [RS:0;jenkins-hbase4:40629] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:44,223 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,224 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,224 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:44,224 WARN [RS:1;jenkins-hbase4:35181] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:44,224 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:44,224 INFO [RS:1;jenkins-hbase4:35181] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:44,224 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46133 2023-07-13 15:16:44,224 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,226 DEBUG [RS:2;jenkins-hbase4:35843] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,227 WARN [RS:2;jenkins-hbase4:35843] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 15:16:44,227 INFO [RS:2;jenkins-hbase4:35843] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:44,227 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,227 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35181,1689261403500] 2023-07-13 15:16:44,228 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40629,1689261403333] 2023-07-13 15:16:44,228 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35843,1689261403676] 2023-07-13 15:16:44,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=113ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=0ms 2023-07-13 15:16:44,259 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,259 DEBUG [RS:2;jenkins-hbase4:35843] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,259 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,259 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,259 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,259 DEBUG 
[RS:2;jenkins-hbase4:35843] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,260 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,260 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,263 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:44,263 DEBUG [RS:1;jenkins-hbase4:35181] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:44,263 INFO [RS:0;jenkins-hbase4:40629] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:44,263 INFO [RS:1;jenkins-hbase4:35181] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:44,265 INFO [RS:0;jenkins-hbase4:40629] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:44,265 INFO [RS:0;jenkins-hbase4:40629] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:44,265 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,265 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:44,267 DEBUG [RS:2;jenkins-hbase4:35843] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,268 DEBUG [RS:2;jenkins-hbase4:35843] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:44,268 INFO [RS:2;jenkins-hbase4:35843] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:44,270 INFO [RS:1;jenkins-hbase4:35181] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:44,274 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:44,274 INFO [RS:2;jenkins-hbase4:35843] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,275 INFO [RS:1;jenkins-hbase4:35181] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,275 INFO [RS:2;jenkins-hbase4:35843] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,275 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,275 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:44,275 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:44,279 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:44,275 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,280 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,280 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,280 DEBUG [RS:0;jenkins-hbase4:40629] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,281 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:44,281 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,281 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,281 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,282 DEBUG [RS:2;jenkins-hbase4:35843] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,283 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,283 DEBUG [jenkins-hbase4:46327] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:44,286 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,286 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,286 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:44,287 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,287 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,287 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,286 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:44,287 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,287 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,287 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:44,287 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:44,287 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:44,287 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:44,287 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:44,288 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,288 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,288 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35181,1689261403500, state=OPENING 2023-07-13 15:16:44,288 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,288 DEBUG [RS:1;jenkins-hbase4:35181] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:44,291 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:44,291 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:44,291 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=132, ppid=131, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35181,1689261403500}] 2023-07-13 15:16:44,294 INFO [RS:1;jenkins-hbase4:35181] 
hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,296 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,296 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,298 INFO [RS:0;jenkins-hbase4:40629] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:44,298 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40629,1689261403333-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,308 INFO [RS:2;jenkins-hbase4:35843] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:44,308 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35843,1689261403676-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,312 INFO [RS:1;jenkins-hbase4:35181] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:44,313 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35181,1689261403500-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:44,322 INFO [RS:2;jenkins-hbase4:35843] regionserver.Replication(203): jenkins-hbase4.apache.org,35843,1689261403676 started 2023-07-13 15:16:44,323 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35843,1689261403676, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35843, sessionid=0x1015f415947001f 2023-07-13 15:16:44,324 INFO [RS:0;jenkins-hbase4:40629] regionserver.Replication(203): jenkins-hbase4.apache.org,40629,1689261403333 started 2023-07-13 15:16:44,327 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:44,327 DEBUG [RS:2;jenkins-hbase4:35843] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,327 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35843,1689261403676' 2023-07-13 15:16:44,328 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:44,327 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40629,1689261403333, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40629, sessionid=0x1015f415947001d 2023-07-13 15:16:44,328 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:44,328 DEBUG [RS:0;jenkins-hbase4:40629] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,328 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40629,1689261403333' 2023-07-13 15:16:44,328 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: 
'/hbase/flush-table-proc/abort' 2023-07-13 15:16:44,328 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:44,328 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35843,1689261403676' 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40629,1689261403333' 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:44,329 DEBUG [RS:0;jenkins-hbase4:40629] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:44,329 DEBUG [RS:2;jenkins-hbase4:35843] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:44,330 DEBUG [RS:0;jenkins-hbase4:40629] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:44,330 INFO [RS:0;jenkins-hbase4:40629] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:44,330 DEBUG [RS:2;jenkins-hbase4:35843] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:44,330 INFO [RS:0;jenkins-hbase4:40629] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 15:16:44,330 INFO [RS:2;jenkins-hbase4:35843] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:44,330 INFO [RS:2;jenkins-hbase4:35843] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:44,331 INFO [RS:1;jenkins-hbase4:35181] regionserver.Replication(203): jenkins-hbase4.apache.org,35181,1689261403500 started 2023-07-13 15:16:44,331 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35181,1689261403500, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35181, sessionid=0x1015f415947001e 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35181,1689261403500' 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:44,332 DEBUG [RS:1;jenkins-hbase4:35181] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,333 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35181,1689261403500' 2023-07-13 15:16:44,333 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:44,333 DEBUG [RS:1;jenkins-hbase4:35181] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:44,333 DEBUG [RS:1;jenkins-hbase4:35181] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:44,333 INFO [RS:1;jenkins-hbase4:35181] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:44,333 INFO [RS:1;jenkins-hbase4:35181] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:44,401 WARN [ReadOnlyZKClient-127.0.0.1:56695@0x2e3dd715] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 15:16:44,401 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:44,403 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:44,403 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35181] ipc.CallRunner(144): callId: 2 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:47362 deadline: 1689261464403, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,432 INFO [RS:2;jenkins-hbase4:35843] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35843%2C1689261403676, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35843,1689261403676, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:44,432 INFO [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40629%2C1689261403333, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,40629,1689261403333, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:44,435 INFO [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35181%2C1689261403500, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35181,1689261403500, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:44,450 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:44,452 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:44,453 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:44,453 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:44,453 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 
15:16:44,457 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47376, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:44,462 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:44,462 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:44,462 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:44,464 INFO [RS:2;jenkins-hbase4:35843] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35843,1689261403676/jenkins-hbase4.apache.org%2C35843%2C1689261403676.1689261404433 2023-07-13 15:16:44,466 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:44,466 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:44,466 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:44,466 DEBUG [RS:2;jenkins-hbase4:35843] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:44,476 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:44,476 INFO [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,40629,1689261403333/jenkins-hbase4.apache.org%2C40629%2C1689261403333.1689261404433 2023-07-13 15:16:44,476 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:44,476 DEBUG [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], 
DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:44,477 INFO [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35181,1689261403500/jenkins-hbase4.apache.org%2C35181%2C1689261403500.1689261404436 2023-07-13 15:16:44,478 DEBUG [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:44,478 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35181%2C1689261403500.meta, suffix=.meta, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35181,1689261403500, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:44,492 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:44,492 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:44,492 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:44,494 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,35181,1689261403500/jenkins-hbase4.apache.org%2C35181%2C1689261403500.meta.1689261404479.meta 2023-07-13 15:16:44,494 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:44,494 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:44,495 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:44,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:44,498 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:44,499 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:44,499 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:44,499 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:44,506 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:44,506 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:44,511 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:44,514 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:44,519 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca 2023-07-13 15:16:44,520 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:44,520 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:44,521 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:44,521 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:44,522 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:44,529 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:44,529 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:44,533 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:44,533 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:44,533 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:44,534 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:44,535 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:44,535 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:44,535 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:44,547 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:44,547 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:44,552 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:44,552 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:44,557 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302 2023-07-13 15:16:44,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:44,558 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:44,559 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:44,562 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 15:16:44,563 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:44,564 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=167; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10784786880, jitterRate=0.004411548376083374}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:44,564 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:44,567 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=132, masterSystemTime=1689261404450 2023-07-13 15:16:44,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:44,570 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:44,572 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-13 15:16:44,572 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-13 15:16:44,575 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:44,576 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:44,577 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35181,1689261403500, state=OPEN 2023-07-13 15:16:44,578 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 25595 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-13 15:16:44,578 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16964 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-13 15:16:44,579 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:44,579 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:44,581 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HStore(1912): 1588230740/table is initiating minor compaction (all files) 2023-07-13 15:16:44,581 INFO [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/table in hbase:meta,,1.1588230740 2023-07-13 15:16:44,581 INFO 
[RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302] into tmpdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp, totalSize=16.6 K 2023-07-13 15:16:44,582 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HStore(1912): 1588230740/info is initiating minor compaction (all files) 2023-07-13 15:16:44,582 INFO [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 1588230740/info in hbase:meta,,1.1588230740 2023-07-13 15:16:44,582 INFO [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca] into tmpdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp, totalSize=25.0 K 2023-07-13 15:16:44,582 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] compactions.Compactor(207): Compacting 3f9c4b698d1c4d0292338c1574eb859a, keycount=17, bloomtype=NONE, size=6.2 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689261370840 2023-07-13 15:16:44,582 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] compactions.Compactor(207): Compacting 6c4d1a30fa324b9292b3c505317b9f7f, keycount=48, bloomtype=NONE, size=10.2 K, encoding=NONE, compression=NONE, seqNum=79, earliestPutTs=1689261370941 2023-07-13 15:16:44,582 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=132, resume processing ppid=131 2023-07-13 15:16:44,583 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=131, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35181,1689261403500 in 289 msec 2023-07-13 15:16:44,583 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] compactions.Compactor(207): Compacting 3e2c8a0d327b43a89086a648a1aed48b, keycount=20, bloomtype=NONE, size=7.1 K, encoding=NONE, compression=NONE, seqNum=148, earliestPutTs=1689261378925 2023-07-13 15:16:44,583 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] compactions.Compactor(207): Compacting d67a1478638944cbab7e2c10f09f1d65, keycount=10, bloomtype=NONE, size=5.7 K, encoding=NONE, compression=NONE, seqNum=148, earliestPutTs=9223372036854775807 2023-07-13 15:16:44,583 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] compactions.Compactor(207): Compacting e331f406b6624624a6f0dd5ce8e3b5ca, keycount=26, bloomtype=NONE, size=7.7 K, encoding=NONE, compression=NONE, seqNum=163, earliestPutTs=1689261396087 2023-07-13 15:16:44,584 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] compactions.Compactor(207): Compacting 
fac7b86466ef4efabec576fae39af302, keycount=2, bloomtype=NONE, size=4.7 K, encoding=NONE, compression=NONE, seqNum=163, earliestPutTs=1689261397117 2023-07-13 15:16:44,584 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=128 2023-07-13 15:16:44,584 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=128, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 453 msec 2023-07-13 15:16:44,614 INFO [RS:1;jenkins-hbase4:35181-longCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#table#compaction#15 average throughput is 0.26 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-13 15:16:44,615 INFO [RS:1;jenkins-hbase4:35181-shortCompactions-0] throttle.PressureAwareThroughputController(145): 1588230740#info#compaction#16 average throughput is 5.17 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-13 15:16:44,642 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/table/f2795f71130c4d9983d361b9601ad937 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/f2795f71130c4d9983d361b9601ad937 2023-07-13 15:16:44,646 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/98204a5af292435fbf3f542503f1d20a as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/98204a5af292435fbf3f542503f1d20a 2023-07-13 15:16:44,656 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:44,656 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:44,658 INFO [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/table of 1588230740 into f2795f71130c4d9983d361b9601ad937(size=4.9 K), total size for store is 4.9 K. This selection was in queue for 0sec, and took 0sec to execute. 2023-07-13 15:16:44,658 INFO [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 1588230740/info of 1588230740 into 98204a5af292435fbf3f542503f1d20a(size=10.1 K), total size for store is 10.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-13 15:16:44,658 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-13 15:16:44,658 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 1588230740: 2023-07-13 15:16:44,658 INFO [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/table, priority=13, startTime=1689261404569; duration=0sec 2023-07-13 15:16:44,658 INFO [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:meta,,1.1588230740, storeName=1588230740/info, priority=13, startTime=1689261404568; duration=0sec 2023-07-13 15:16:44,659 DEBUG [RS:1;jenkins-hbase4:35181-longCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:44,659 DEBUG [RS:1;jenkins-hbase4:35181-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:44,716 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:44,716 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43473 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:44,717 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43473 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 2023-07-13 15:16:44,821 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43473 this server is in the failed servers list 2023-07-13 15:16:45,026 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43473 this server is in the failed servers list 2023-07-13 15:16:45,330 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43473 this server is in the failed servers list 2023-07-13 15:16:45,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=1616ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=1503ms 2023-07-13 15:16:45,835 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] ipc.AbstractRpcClient(347): Not trying to connect to jenkins-hbase4.apache.org/172.31.14.131:43473 this server is in the failed servers list 2023-07-13 15:16:45,996 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-13 15:16:46,846 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43473 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 Caused by: java.net.ConnectException: finishConnect(..) 
failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:46,848 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43473 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 2023-07-13 15:16:47,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(801): Waiting on regionserver count=3; waited=3118ms, expecting min=1 server(s), max=NO_LIMIT server(s), timeout=4500ms, lastChange=3005ms 2023-07-13 15:16:48,646 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=4521ms, expected min=1 server(s), max=NO_LIMIT server(s), master is running 2023-07-13 15:16:48,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 15:16:48,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,38071,1689261390766, regionLocation=jenkins-hbase4.apache.org,38071,1689261390766, openSeqNum=21 2023-07-13 15:16:48,648 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=6f63fe7474be7b61966d8c0a666e0157, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,43473,1689261390950, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950, openSeqNum=2 2023-07-13 15:16:48,649 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.RegionStateStore(147): Load hbase:meta entry region=111352044b1bd403da18db964c499c82, regionState=OPEN, lastHost=jenkins-hbase4.apache.org,43473,1689261390950, regionLocation=jenkins-hbase4.apache.org,43473,1689261390950, openSeqNum=77 2023-07-13 15:16:48,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 15:16:48,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689261468649 2023-07-13 15:16:48,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689261528649 2023-07-13 15:16:48,649 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 2 msec 2023-07-13 15:16:48,667 INFO [PEWorker-5] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,33739,1689261390578 had 1 regions 2023-07-13 15:16:48,667 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,43473,1689261390950 had 2 regions 2023-07-13 15:16:48,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46327,1689261403181-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:48,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46327,1689261403181-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:48,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46327,1689261403181-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:48,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46327, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:48,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:48,668 WARN [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1240): hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. is NOT online; state={8f9b3c3c0c701a7e057738cfe2a31027 state=OPEN, ts=1689261408648, server=jenkins-hbase4.apache.org,38071,1689261390766}; ServerCrashProcedures=true. 
Master startup cannot progress, in holding-pattern until region onlined. 2023-07-13 15:16:48,667 INFO [PEWorker-2] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,38071,1689261390766 had 1 regions 2023-07-13 15:16:48,670 INFO [PEWorker-5] procedure.ServerCrashProcedure(300): Splitting WALs pid=128, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33739,1689261390578, splitWal=true, meta=true, isMeta: false 2023-07-13 15:16:48,670 INFO [PEWorker-2] procedure.ServerCrashProcedure(300): Splitting WALs pid=130, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,38071,1689261390766, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:48,670 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=129, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,43473,1689261390950, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:48,673 WARN [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(172): unknown_server=jenkins-hbase4.apache.org,38071,1689261390766/hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., unknown_server=jenkins-hbase4.apache.org,43473,1689261390950/hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157., unknown_server=jenkins-hbase4.apache.org,43473,1689261390950/hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:48,673 DEBUG [PEWorker-2] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766-splitting 2023-07-13 15:16:48,674 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578-splitting dir is empty, no logs to split. 2023-07-13 15:16:48,674 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,33739,1689261390578 WAL count=0, meta=false 2023-07-13 15:16:48,675 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766-splitting dir is empty, no logs to split. 2023-07-13 15:16:48,675 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,38071,1689261390766 WAL count=0, meta=false 2023-07-13 15:16:48,675 DEBUG [PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950-splitting 2023-07-13 15:16:48,676 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950-splitting dir is empty, no logs to split. 2023-07-13 15:16:48,676 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43473,1689261390950 WAL count=0, meta=false 2023-07-13 15:16:48,676 INFO [PEWorker-5] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33739,1689261390578-splitting dir is empty, no logs to split. 
2023-07-13 15:16:48,676 INFO [PEWorker-5] master.SplitWALManager(106): jenkins-hbase4.apache.org,33739,1689261390578 WAL count=0, meta=false 2023-07-13 15:16:48,676 DEBUG [PEWorker-5] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,33739,1689261390578 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:48,678 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,33739,1689261390578 after splitting done 2023-07-13 15:16:48,678 INFO [PEWorker-2] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766-splitting dir is empty, no logs to split. 2023-07-13 15:16:48,678 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,33739,1689261390578 from processing; numProcessing=2 2023-07-13 15:16:48,678 INFO [PEWorker-2] master.SplitWALManager(106): jenkins-hbase4.apache.org,38071,1689261390766 WAL count=0, meta=false 2023-07-13 15:16:48,678 DEBUG [PEWorker-2] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,38071,1689261390766 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:48,679 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950-splitting dir is empty, no logs to split. 2023-07-13 15:16:48,679 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,43473,1689261390950 WAL count=0, meta=false 2023-07-13 15:16:48,679 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,43473,1689261390950 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:48,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33739,1689261390578, splitWal=true, meta=true in 4.6410 sec 2023-07-13 15:16:48,683 INFO [PEWorker-2] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,38071,1689261390766 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,38071,1689261390766-splitting does not exist. 2023-07-13 15:16:48,684 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,43473,1689261390950 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,43473,1689261390950-splitting does not exist. 
2023-07-13 15:16:48,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=130, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN}] 2023-07-13 15:16:48,685 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN}, {pid=135, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN}] 2023-07-13 15:16:48,685 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=130, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN 2023-07-13 15:16:48,685 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=130, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:48,685 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN 2023-07-13 15:16:48,685 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN 2023-07-13 15:16:48,686 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:48,686 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN; state=OPEN, location=null; forceNewPlan=true, retain=false 2023-07-13 15:16:48,686 DEBUG [jenkins-hbase4:46327] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:48,686 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:48,687 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:48,687 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:48,687 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:48,687 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(378): Number of tables=2, number of hosts=1, number of racks=1 2023-07-13 15:16:48,688 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta 
row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:48,688 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:48,688 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261408688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261408688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261408688"}]},"ts":"1689261408688"} 2023-07-13 15:16:48,688 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261408688"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261408688"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261408688"}]},"ts":"1689261408688"} 2023-07-13 15:16:48,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,35181,1689261403500}] 2023-07-13 15:16:48,691 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=133, state=RUNNABLE; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,35843,1689261403676}] 2023-07-13 15:16:48,838 DEBUG [jenkins-hbase4:46327] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 15:16:48,839 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-13 15:16:48,839 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 15:16:48,839 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 15:16:48,839 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 15:16:48,839 DEBUG [jenkins-hbase4:46327] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 15:16:48,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:48,841 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261408840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261408840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261408840"}]},"ts":"1689261408840"} 2023-07-13 15:16:48,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=134, state=RUNNABLE; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,35843,1689261403676}] 2023-07-13 15:16:48,843 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35843,1689261403676 
2023-07-13 15:16:48,843 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:48,845 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44292, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:48,848 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:48,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f63fe7474be7b61966d8c0a666e0157, NAME => 'hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:48,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:48,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,849 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,851 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,852 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:48,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:48,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:48,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,852 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,852 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:48,852 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:48,853 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f63fe7474be7b61966d8c0a666e0157 columnFamilyName q 2023-07-13 15:16:48,853 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:48,853 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,853 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,854 DEBUG 
[StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:48,854 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:48,854 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:48,854 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:48,855 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f63fe7474be7b61966d8c0a666e0157 columnFamilyName u 2023-07-13 15:16:48,855 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f9b3c3c0c701a7e057738cfe2a31027 columnFamilyName info 2023-07-13 15:16:48,855 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:48,856 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,857 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,860 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-13 15:16:48,861 WARN [RS-EventLoopGroup-16-3] ipc.NettyRpcConnection$2(294): Exception encountered while connecting to the server jenkins-hbase4.apache.org/172.31.14.131:43473 org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:48,862 DEBUG [RS-EventLoopGroup-16-3] ipc.FailedServers(52): Added failed server with address jenkins-hbase4.apache.org/172.31.14.131:43473 to list caused by org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 2023-07-13 15:16:48,863 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] client.RpcRetryingCallerImpl(129): Call exception, tries=6, retries=46, started=4152 ms ago, cancelled=false, msg=Call to address=jenkins-hbase4.apache.org/172.31.14.131:43473 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473, details=row '\x00' on table 'hbase:rsgroup' at region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., hostname=jenkins-hbase4.apache.org,43473,1689261390950, seqNum=77, see https://s.apache.org/timeout, exception=java.net.ConnectException: Call to address=jenkins-hbase4.apache.org/172.31.14.131:43473 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) 
failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:186) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:385) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:99) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:398) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:368) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1428) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:396) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:376) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:913) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:195) at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$300(NettyRpcConnection.java:76) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:296) at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:287) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:674) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:693) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: jenkins-hbase4.apache.org/172.31.14.131:43473 Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155) at org.apache.hbase.thirdparty.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128) at org.apache.hbase.thirdparty.io.netty.channel.unix.Socket.finishConnect(Socket.java:359) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 15:16:48,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:48,864 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6f63fe7474be7b61966d8c0a666e0157; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9798490080, jitterRate=-0.0874444991350174}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 15:16:48,864 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:48,864 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:48,864 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:48,865 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157., pid=136, masterSystemTime=1689261408843 2023-07-13 15:16:48,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): 
Finished post open deploy task for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:48,868 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:48,868 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:48,868 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261408868"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261408868"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261408868"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261408868"}]},"ts":"1689261408868"} 2023-07-13 15:16:48,871 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:48,871 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(310): Store=8f9b3c3c0c701a7e057738cfe2a31027/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:48,871 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=135 2023-07-13 15:16:48,871 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,35181,1689261403500 in 180 msec 2023-07-13 15:16:48,872 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, ASSIGN in 186 msec 2023-07-13 15:16:48,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:48,879 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f9b3c3c0c701a7e057738cfe2a31027; next sequenceid=24; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10110333280, jitterRate=-0.058401837944984436}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:48,879 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f9b3c3c0c701a7e057738cfe2a31027: 
2023-07-13 15:16:48,880 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., pid=137, masterSystemTime=1689261408843 2023-07-13 15:16:48,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:48,884 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:48,884 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, openSeqNum=24, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:48,884 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261408884"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261408884"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261408884"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261408884"}]},"ts":"1689261408884"} 2023-07-13 15:16:48,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=133 2023-07-13 15:16:48,887 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=133, state=SUCCESS; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,35843,1689261403676 in 194 msec 2023-07-13 15:16:48,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=130 2023-07-13 15:16:48,889 INFO [PEWorker-2] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,38071,1689261390766 after splitting done 2023-07-13 15:16:48,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=130, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, ASSIGN in 203 msec 2023-07-13 15:16:48,889 DEBUG [PEWorker-2] master.DeadServer(114): Removed jenkins-hbase4.apache.org,38071,1689261390766 from processing; numProcessing=1 2023-07-13 15:16:48,890 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,38071,1689261390766, splitWal=true, meta=false in 4.8460 sec 2023-07-13 15:16:49,001 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:49,001 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. service=MultiRowMutationService 2023-07-13 15:16:49,002 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,002 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,004 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,005 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:49,005 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:49,005 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
111352044b1bd403da18db964c499c82 columnFamilyName m 2023-07-13 15:16:49,013 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:49,018 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:49,018 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:49,022 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60 2023-07-13 15:16:49,022 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(310): Store=111352044b1bd403da18db964c499c82/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:49,023 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,024 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:49,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 111352044b1bd403da18db964c499c82; next sequenceid=84; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@31663802, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:49,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:49,028 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., pid=138, masterSystemTime=1689261408997 2023-07-13 15:16:49,029 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.CompactSplit(385): Small Compaction requested: system; Because: Opening Region; compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:49,030 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] compactions.SortedCompactionPolicy(75): Selecting compaction from 3 store files, 0 compacting, 3 eligible, 16 blocking 2023-07-13 15:16:49,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:49,031 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] compactions.ExploringCompactionPolicy(116): Exploring compaction algorithm has selected 3 files of size 16056 starting at candidate #0 after considering 1 permutations with 1 in ratio 2023-07-13 15:16:49,031 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:49,031 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HStore(1912): 111352044b1bd403da18db964c499c82/m is initiating minor compaction (all files) 2023-07-13 15:16:49,031 INFO [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HRegion(2259): Starting compaction of 111352044b1bd403da18db964c499c82/m in hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:49,032 INFO [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HStore(1468): Starting compaction of [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60] into tmpdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp, totalSize=15.7 K 2023-07-13 15:16:49,032 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPEN, openSeqNum=84, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:49,032 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261409032"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261409032"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261409032"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261409032"}]},"ts":"1689261409032"} 2023-07-13 15:16:49,032 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] compactions.Compactor(207): Compacting 9cc85fdd31c84a01b1065fb63289ca00, keycount=3, bloomtype=ROW, size=5.1 K, encoding=NONE, compression=NONE, seqNum=9, earliestPutTs=1689261371924 2023-07-13 15:16:49,033 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] compactions.Compactor(207): Compacting a4b8548565d54812ae823f3bc7af5c62, keycount=21, bloomtype=ROW, size=5.7 K, encoding=NONE, compression=NONE, seqNum=73, earliestPutTs=1689261387364 2023-07-13 15:16:49,033 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] compactions.Compactor(207): Compacting b38b6008019f46ea832797c52903ef60, keycount=2, bloomtype=ROW, size=5.0 K, encoding=NONE, compression=NONE, seqNum=80, earliestPutTs=1689261400237 2023-07-13 15:16:49,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=134 2023-07-13 15:16:49,038 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=134, state=SUCCESS; OpenRegionProcedure 111352044b1bd403da18db964c499c82, 
server=jenkins-hbase4.apache.org,35843,1689261403676 in 191 msec 2023-07-13 15:16:49,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=129 2023-07-13 15:16:49,040 INFO [PEWorker-5] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,43473,1689261390950 after splitting done 2023-07-13 15:16:49,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, ASSIGN in 353 msec 2023-07-13 15:16:49,040 DEBUG [PEWorker-5] master.DeadServer(114): Removed jenkins-hbase4.apache.org,43473,1689261390950 from processing; numProcessing=0 2023-07-13 15:16:49,042 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,43473,1689261390950, splitWal=true, meta=false in 5.0000 sec 2023-07-13 15:16:49,052 INFO [RS:2;jenkins-hbase4:35843-shortCompactions-0] throttle.PressureAwareThroughputController(145): 111352044b1bd403da18db964c499c82#m#compaction#17 average throughput is 0.23 MB/second, slept 0 time(s) and total slept time is 0 ms. 0 active operations remaining, total limit is 50.00 MB/second 2023-07-13 15:16:49,084 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/ee4cbb19ba2c487fbc3e9ddc06050cf7 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/ee4cbb19ba2c487fbc3e9ddc06050cf7 2023-07-13 15:16:49,092 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:49,093 INFO [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HStore(1652): Completed compaction of 3 (all) file(s) in 111352044b1bd403da18db964c499c82/m of 111352044b1bd403da18db964c499c82 into ee4cbb19ba2c487fbc3e9ddc06050cf7(size=5.1 K), total size for store is 5.1 K. This selection was in queue for 0sec, and took 0sec to execute. 
2023-07-13 15:16:49,093 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.HRegion(2289): Compaction status journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:49,093 INFO [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(627): Completed compaction region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., storeName=111352044b1bd403da18db964c499c82/m, priority=13, startTime=1689261409029; duration=0sec 2023-07-13 15:16:49,093 DEBUG [RS:2;jenkins-hbase4:35843-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(659): Status compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0 2023-07-13 15:16:49,669 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/namespace 2023-07-13 15:16:49,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:49,682 INFO [RS-EventLoopGroup-16-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:49,696 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 15:16:49,699 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 15:16:49,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 5.847sec 2023-07-13 15:16:49,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 15:16:49,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 15:16:49,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 15:16:49,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46327,1689261403181-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 15:16:49,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46327,1689261403181-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-13 15:16:49,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 15:16:49,743 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x79d8c00f to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:49,750 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e5b25a7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:49,753 DEBUG [hconnection-0x484a84dc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:49,756 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:49,762 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1262): HBase has been restarted 2023-07-13 15:16:49,763 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x79d8c00f to 127.0.0.1:56695 2023-07-13 15:16:49,763 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:49,765 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(2939): Invalidated connection. Updating master addresses before: jenkins-hbase4.apache.org:46327 after: jenkins-hbase4.apache.org:46327 2023-07-13 15:16:49,765 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x66512a10 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:49,773 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a212e62, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:49,774 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:50,002 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 15:16:50,264 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:quota' 2023-07-13 15:16:50,271 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 15:16:51,061 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-13 15:16:52,234 INFO [ReplicationExecutor-0] regionserver.ReplicationSourceManager$NodeFailoverWorker(712): Not transferring queue since we are shutting down 2023-07-13 15:16:52,906 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 15:16:52,907 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-13 15:16:52,924 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:52,924 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:52,925 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:52,926 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rsgroup 2023-07-13 15:16:52,926 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 15:16:52,977 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 15:16:52,979 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54730, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 15:16:52,982 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-13 15:16:52,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-13 15:16:52,983 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(139): Connect 0x6db61f69 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:52,992 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e691d1c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:52,993 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:52,996 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:52,997 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1015f4159470027 connected 2023-07-13 15:16:52,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:52,999 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:53,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:53,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:53,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:53,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:53,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:53,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:53,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:53,010 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 15:16:53,024 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting 
hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:53,025 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33699 2023-07-13 15:16:53,025 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:53,030 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:53,031 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:53,032 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:53,033 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33699 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:53,041 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:336990x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:53,043 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:336990x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:53,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33699-0x1015f4159470028 connected 2023-07-13 15:16:53,045 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:53,046 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:53,051 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33699 2023-07-13 15:16:53,056 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33699 2023-07-13 15:16:53,060 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33699 2023-07-13 15:16:53,061 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33699 2023-07-13 15:16:53,061 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33699 2023-07-13 15:16:53,063 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:53,064 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:53,064 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:53,064 INFO [Listener at 
localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:53,065 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:53,065 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:53,065 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:53,066 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 41161 2023-07-13 15:16:53,066 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:53,089 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:53,089 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a7d1dc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:53,090 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:53,090 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d302e38{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:53,247 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:53,248 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:53,248 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:53,248 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:53,250 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:53,250 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3fff9b42{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-41161-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3589826361343760395/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:53,252 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@82b774c{HTTP/1.1, (http/1.1)}{0.0.0.0:41161} 2023-07-13 15:16:53,252 INFO [Listener at localhost/35161] server.Server(415): Started @53252ms 2023-07-13 15:16:53,255 INFO 
[RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:53,255 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:53,257 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:53,257 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:53,258 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:53,263 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ReadOnlyZKClient(139): Connect 0x6a7acbea to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:53,267 DEBUG [RS:3;jenkins-hbase4:33699] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64bd28d5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:53,267 DEBUG [RS:3;jenkins-hbase4:33699] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6082a09a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:53,280 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33699 2023-07-13 15:16:53,280 INFO [RS:3;jenkins-hbase4:33699] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:53,280 INFO [RS:3;jenkins-hbase4:33699] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:53,280 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:53,281 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46327,1689261403181 with isa=jenkins-hbase4.apache.org/172.31.14.131:33699, startcode=1689261413023 2023-07-13 15:16:53,281 DEBUG [RS:3;jenkins-hbase4:33699] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:53,283 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52157, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.11 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:53,283 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,283 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:53,284 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:53,284 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:53,284 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46133 2023-07-13 15:16:53,287 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:53,287 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:53,287 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:53,287 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,287 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,288 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:53,287 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:53,287 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,288 WARN [RS:3;jenkins-hbase4:33699] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:53,288 INFO [RS:3;jenkins-hbase4:33699] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:53,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,288 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,288 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,289 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33699,1689261413023] 2023-07-13 15:16:53,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:53,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,293 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:53,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:53,294 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:53,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,295 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,296 
DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,296 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,296 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:53,297 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ZKUtil(162): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,297 DEBUG [RS:3;jenkins-hbase4:33699] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:53,297 INFO [RS:3;jenkins-hbase4:33699] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:53,299 INFO [RS:3;jenkins-hbase4:33699] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:53,299 INFO [RS:3;jenkins-hbase4:33699] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:53,299 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:53,299 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:53,301 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,301 DEBUG [RS:3;jenkins-hbase4:33699] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:53,303 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:53,303 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:53,304 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:53,320 INFO [RS:3;jenkins-hbase4:33699] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:53,320 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33699,1689261413023-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:53,347 INFO [RS:3;jenkins-hbase4:33699] regionserver.Replication(203): jenkins-hbase4.apache.org,33699,1689261413023 started 2023-07-13 15:16:53,347 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33699,1689261413023, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33699, sessionid=0x1015f4159470028 2023-07-13 15:16:53,347 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:53,347 DEBUG [RS:3;jenkins-hbase4:33699] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,347 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33699,1689261413023' 2023-07-13 15:16:53,347 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:53,348 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:53,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:53,350 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:53,350 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:53,351 DEBUG [RS:3;jenkins-hbase4:33699] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:53,351 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33699,1689261413023' 2023-07-13 15:16:53,351 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:53,352 DEBUG [RS:3;jenkins-hbase4:33699] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:53,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:53,353 DEBUG [RS:3;jenkins-hbase4:33699] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:53,353 INFO [RS:3;jenkins-hbase4:33699] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:53,353 INFO [RS:3;jenkins-hbase4:33699] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:53,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:53,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:53,363 DEBUG [hconnection-0x1187a96b-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:53,365 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47946, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:53,374 DEBUG [hconnection-0x1187a96b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:53,376 INFO [RS-EventLoopGroup-16-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:53,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46327] to rsgroup master 2023-07-13 15:16:53,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:53,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.CallRunner(144): callId: 25 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54730 deadline: 1689262613381, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. 
2023-07-13 15:16:53,382 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:53,383 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:53,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,384 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33699, jenkins-hbase4.apache.org:35181, jenkins-hbase4.apache.org:35843, jenkins-hbase4.apache.org:40629], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:53,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:53,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:53,453 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testRSGroupsWithHBaseQuota Thread=555 (was 526) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp718245837-1704-acceptor-0@33d40691-ServerConnector@3a309945{HTTP/1.1, (http/1.1)}{0.0.0.0:46133} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:52354 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:55142 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6a7acbea-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1176913407_17 at /127.0.0.1:55120 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp2002158265-1740 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:46327 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp718245837-1705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:35843-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1808 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x1187a96b-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1298837741_17 at /127.0.0.1:52336 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:40629 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33699 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2064 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp718245837-1706 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2069 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1b4912ea-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1961050554-1766 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6db61f69-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: qtp1961050554-1767 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:52352 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x38b5f8c1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: 
BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2067 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1806 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1794 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x35dd3ab6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2002158265-1739 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2002158265-1734 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp83092728-1797 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1812 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1801 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp718245837-1710 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_-1627220607_17 at /127.0.0.1:39702 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x66512a10-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-1d5b5964-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1627220607_17 at /127.0.0.1:55108 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46509,1689261390391 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp718245837-1708 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.10@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1961050554-1771 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1298837741_17 at /127.0.0.1:39712 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp718245837-1703 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1658034472_17 at /127.0.0.1:52434 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33699Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:35181-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x313585c4-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:35843-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6db61f69-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp545685846-2065 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1809-acceptor-0@68ebde25-ServerConnector@38cd7617{HTTP/1.1, (http/1.1)}{0.0.0.0:34441} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2002158265-1738 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:33699-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x66512a10-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:2;jenkins-hbase4:35843 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40629Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1800 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2002158265-1736 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-5 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x2e3dd715-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1658034472_17 at /127.0.0.1:39830 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1796 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1298837741_17 at /127.0.0.1:55130 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35843 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x35dd3ab6-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x313585c4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-99952961_17 at /127.0.0.1:58596 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1961050554-1770 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2068 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1961050554-1764 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261404119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x313585c4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2002158265-1737 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6a7acbea sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp906665638-1805 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1798 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2063-acceptor-0@71cf8047-ServerConnector@82b774c{HTTP/1.1, (http/1.1)}{0.0.0.0:41161} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x66512a10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,35181,1689261403500 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2066 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp718245837-1709 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:35181Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-17-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1807 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1176913407_17 at /127.0.0.1:39706 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1810 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x7de674e9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:35843Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_-1627220607_17 at /127.0.0.1:52320 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741895_1071] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x38b5f8c1-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:39720 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x7de674e9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:55134 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741898_1074] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2002158265-1741 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1176913407_17 at /127.0.0.1:52326 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741896_1072] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (352862075) connection to localhost/127.0.0.1:36199 from jenkins.hfs.11 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2009597142_17 at /127.0.0.1:39724 [Receiving block BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35843 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData-prefix:jenkins-hbase4.apache.org,46327,1689261403181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:35181 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6a7acbea-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-27dc1b1b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp906665638-1811 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261404125 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:36199 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xe8c6183-metaLookup-shared--pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1658034472_17 at /127.0.0.1:52430 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1187a96b-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp83092728-1795-acceptor-0@a59d457-ServerConnector@4b5beed5{HTTP/1.1, (http/1.1)}{0.0.0.0:41099} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,35843,1689261403676 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-472e901b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x6db61f69 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp718245837-1707 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33699 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x2e3dd715 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-262e4506-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x7de674e9-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1961050554-1765-acceptor-0@3af88b82-ServerConnector@4cb46a72{HTTP/1.1, (http/1.1)}{0.0.0.0:34989} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x35dd3ab6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741899_1075, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp545685846-2062 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1396842378.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x38b5f8c1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/238498379.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56695@0x2e3dd715-SendThread(127.0.0.1:56695) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1961050554-1768 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2002158265-1735-acceptor-0@190a86ae-ServerConnector@6139a553{HTTP/1.1, (http/1.1)}{0.0.0.0:42333} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,40629,1689261403333 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1961050554-1769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35181 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-899266584-172.31.14.131-1689261362039:blk_1073741897_1073, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:40629-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:35181-shortCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046-prefix:jenkins-hbase4.apache.org,35181,1689261403500.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=877 (was 799) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=448 (was 438) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=5797 (was 4113) - AvailableMemoryMB LEAK? - 2023-07-13 15:16:53,456 INFO [RS:3;jenkins-hbase4:33699] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33699%2C1689261413023, suffix=, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:53,456 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=555 is superior to 500 2023-07-13 15:16:53,524 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:53,524 INFO [Listener at localhost/35161] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=552, OpenFileDescriptor=871, MaxFileDescriptor=60000, SystemLoadAverage=448, ProcessCount=172, AvailableMemoryMB=5795 2023-07-13 15:16:53,524 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:53,524 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=552 is superior to 500 2023-07-13 15:16:53,524 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(132): testClearDeadServers 2023-07-13 15:16:53,539 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:53,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:53,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
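The ListRSGroupInfos / AddRSGroup / MoveServers / RemoveRSGroup requests logged around this point are driven from the test through the client-side helper that also shows up in the stack trace further down (org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers). A minimal sketch of that call pattern is included here for orientation only; the constructor, the exact method signatures, the Address helper, and the group name used are assumptions modelled on the branch-2.x rsgroup module, not something verified against this particular build:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        // Connect to the (mini)cluster using whatever hbase-site.xml is on the classpath.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);   // assumed constructor

          // "add rsgroup ..." entries below create a fresh group for the test.
          admin.addRSGroup("Group_testClearDeadServers_demo");       // hypothetical group name

          // "move servers [...] to rsgroup ..." moves region servers, addressed as host:port,
          // into that group. An address that is not an online region server (for example the
          // master's own address) is rejected with a ConstraintException, which is the
          // "either offline or it does not exist" warning logged a few entries below.
          admin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:33699")),
              "Group_testClearDeadServers_demo");

          // "remove rsgroup ..." cleans the group up again once its servers are back in default.
          admin.removeRSGroup("Group_testClearDeadServers_demo");
        }
      }
    }

The same operations are exposed through the HBase shell (add_rsgroup, move_servers_rsgroup, remove_rsgroup), which is usually the simpler way to reproduce by hand what the test exercises here.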
2023-07-13 15:16:53,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:53,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:53,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:53,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:53,564 INFO [RS:3;jenkins-hbase4:33699] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023/jenkins-hbase4.apache.org%2C33699%2C1689261413023.1689261413456 2023-07-13 15:16:53,570 DEBUG [RS:3;jenkins-hbase4:33699] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK], DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK]] 2023-07-13 15:16:53,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:53,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:53,584 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 15:16:53,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:53,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:53,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:53,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:53,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46327] to rsgroup master 2023-07-13 15:16:53,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:53,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.CallRunner(144): callId: 53 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54730 deadline: 1689262613613, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. 2023-07-13 15:16:53,614 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.beforeMethod(TestRSGroupsBasics.java:77) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:53,616 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:53,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,617 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33699, jenkins-hbase4.apache.org:35181, jenkins-hbase4.apache.org:35843, jenkins-hbase4.apache.org:40629], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:53,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:53,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:53,619 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBasics(214): testClearDeadServers 2023-07-13 15:16:53,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:53,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:53,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:53,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:53,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:53,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:53,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
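For context on the RSGroupAdminService RPCs recorded above (AddRSGroup, ListRSGroupInfos, GetRSGroupInfo): the hbase-rsgroup module under test exposes them to clients through a thin wrapper, RSGroupAdminClient, which the TestRSGroupsBase-derived tests drive against the master. The snippet below is an editorial sketch, not output from this run; it assumes the branch-2.4 hbase-rsgroup client API and uses a placeholder group name.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class AddGroupSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Thin client over the master's RSGroupAdminService coprocessor endpoint.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Equivalent of the "add rsgroup Group_testClearDeadServers_..." request in the log.
      rsGroupAdmin.addRSGroup("Group_example"); // hypothetical group name

      // Equivalent of the ListRSGroupInfos request issued while the test waits for cleanup.
      for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
        System.out.println(info.getName() + " -> " + info.getServers());
      }
    }
  }
}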
master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:53,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33699, jenkins-hbase4.apache.org:35843, jenkins-hbase4.apache.org:35181] to rsgroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:53,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:53,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:53,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(238): Moving server region 6f63fe7474be7b61966d8c0a666e0157, which do not belong to RSGroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] procedure2.ProcedureExecutor(1029): Stored pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, REOPEN/MOVE 2023-07-13 15:16:53,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,644 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, REOPEN/MOVE 2023-07-13 15:16:53,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:53,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(238): Moving server region 8f9b3c3c0c701a7e057738cfe2a31027, which do not belong to RSGroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,645 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 15:16:53,646 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,646 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261413646"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261413646"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261413646"}]},"ts":"1689261413646"} 2023-07-13 15:16:53,648 DEBUG 
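The "move servers [...] to rsgroup Group_testClearDeadServers_1208319808" request just above is the same MoveServers RPC that produced the earlier ConstraintException ("Server ... is either offline or it does not exist") when it named a server the master could not resolve. A minimal client-side sketch follows, assuming the same branch-2.4 RSGroupAdminClient as in the previous sketch; host/port values are copied from the log for illustration only.

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveServersSketch {
  // Assumes an RSGroupAdminClient built as in the previous sketch.
  static void moveServersToGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
    // The master only accepts addresses it can map to servers it knows about;
    // otherwise RSGroupAdminServer.moveServers fails with the ConstraintException quoted earlier.
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33699));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35843));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35181));
    rsGroupAdmin.moveServers(servers, "Group_testClearDeadServers_1208319808");
  }
}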
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] procedure2.ProcedureExecutor(1029): Stored pid=141, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:53,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(238): Moving server region 111352044b1bd403da18db964c499c82, which do not belong to RSGroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:53,648 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE 2023-07-13 15:16:53,652 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,35181,1689261403500, state=CLOSING 2023-07-13 15:16:53,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; CloseRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,35181,1689261403500}] 2023-07-13 15:16:53,653 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:53,653 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:53,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=140, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35181,1689261403500}] 2023-07-13 15:16:53,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] procedure2.ProcedureExecutor(1029): Stored pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:53,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(286): Moving 4 region(s) to group default, current retry=0 2023-07-13 15:16:53,655 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:53,656 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=143, ppid=139, state=RUNNABLE; CloseRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:53,656 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,656 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261413656"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261413656"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261413656"}]},"ts":"1689261413656"} 2023-07-13 15:16:53,658 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=145, ppid=141, state=RUNNABLE; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,35843,1689261403676}] 2023-07-13 15:16:53,661 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=145, ppid=141, state=RUNNABLE; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:53,809 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 15:16:53,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:53,809 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:53,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:53,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:53,809 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:53,810 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.46 KB heapSize=6.42 KB 2023-07-13 15:16:53,839 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.46 KB at sequenceid=179 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/f8a6c1cf2a3d4188b180a910003c45b7 2023-07-13 15:16:53,847 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/f8a6c1cf2a3d4188b180a910003c45b7 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/f8a6c1cf2a3d4188b180a910003c45b7 2023-07-13 15:16:53,856 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/f8a6c1cf2a3d4188b180a910003c45b7, entries=30, sequenceid=179, filesize=8.2 K 2023-07-13 15:16:53,857 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.46 KB/3546, heapSize ~5.91 KB/6048, currentSize=0 B/0 for 1588230740 in 47ms, sequenceid=179, compaction requested=false 2023-07-13 15:16:53,879 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca] to archive 2023-07-13 15:16:53,880 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving 
compacted files. 2023-07-13 15:16:53,884 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/info/6c4d1a30fa324b9292b3c505317b9f7f 2023-07-13 15:16:53,887 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/info/3e2c8a0d327b43a89086a648a1aed48b 2023-07-13 15:16:53,894 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/info/e331f406b6624624a6f0dd5ce8e3b5ca 2023-07-13 15:16:53,958 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302] to archive 2023-07-13 15:16:53,959 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-13 15:16:53,961 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/table/3f9c4b698d1c4d0292338c1574eb859a 2023-07-13 15:16:53,963 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/table/d67a1478638944cbab7e2c10f09f1d65 2023-07-13 15:16:53,965 DEBUG [StoreCloser-hbase:meta,,1.1588230740-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/meta/1588230740/table/fac7b86466ef4efabec576fae39af302 2023-07-13 15:16:53,974 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/182.seqid, newMaxSeqId=182, maxSeqId=166 2023-07-13 15:16:53,975 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:53,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:53,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:53,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,40629,1689261403333 record at close sequenceid=179 2023-07-13 15:16:53,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-13 15:16:53,983 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 15:16:53,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=140 2023-07-13 15:16:53,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,35181,1689261403500 in 330 msec 2023-07-13 15:16:53,988 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=140, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40629,1689261403333; forceNewPlan=false, retain=false 2023-07-13 15:16:54,139 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40629,1689261403333, state=OPENING 2023-07-13 15:16:54,140 DEBUG [Listener at localhost/35161-EventThread] 
zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:54,140 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=140, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40629,1689261403333}] 2023-07-13 15:16:54,140 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:54,293 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,294 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:54,297 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:54,304 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 15:16:54,304 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:54,307 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40629%2C1689261403333.meta, suffix=.meta, logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,40629,1689261403333, archiveDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs, maxLogs=32 2023-07-13 15:16:54,344 DEBUG [RS-EventLoopGroup-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK] 2023-07-13 15:16:54,344 DEBUG [RS-EventLoopGroup-17-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK] 2023-07-13 15:16:54,344 DEBUG [RS-EventLoopGroup-17-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK] 2023-07-13 15:16:54,352 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,40629,1689261403333/jenkins-hbase4.apache.org%2C40629%2C1689261403333.meta.1689261414308.meta 2023-07-13 15:16:54,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38723,DS-9c83551c-8838-49a7-8254-8997fa3f68f2,DISK], DatanodeInfoWithStorage[127.0.0.1:33357,DS-b49739d6-b46c-4b6e-a2b8-71840a57307d,DISK], DatanodeInfoWithStorage[127.0.0.1:37767,DS-446258f2-9f59-4682-ba62-dd8f3f96d844,DISK]] 2023-07-13 15:16:54,353 DEBUG 
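The meta region open above instantiates AsyncFSWALProvider and rolls a fresh WAL (blocksize=256 MB, rollsize=128 MB, maxLogs=32). As a hedged illustration only: the provider choice and log-count cap are governed by standard HBase 2.x configuration keys, sketched below; the values shown are examples, not necessarily what this cluster used.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class WalProviderConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider, the provider instantiated in the log;
    // "filesystem" would select the classic FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    // Upper bound on un-archived WAL files per server; the log reports maxLogs=32.
    conf.setInt("hbase.regionserver.maxlogs", 32);
    System.out.println("wal provider = " + conf.get("hbase.wal.provider"));
  }
}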
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:54,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:54,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 15:16:54,353 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 15:16:54,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 15:16:54,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:54,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 15:16:54,354 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 15:16:54,355 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 15:16:54,356 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:54,356 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info 2023-07-13 15:16:54,356 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 15:16:54,374 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/98204a5af292435fbf3f542503f1d20a 2023-07-13 15:16:54,380 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/f8a6c1cf2a3d4188b180a910003c45b7 2023-07-13 15:16:54,380 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,381 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 15:16:54,382 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:54,382 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier 2023-07-13 15:16:54,382 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 15:16:54,402 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:54,402 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/e29818aea41d432187d74bea6ea06843 2023-07-13 15:16:54,407 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:54,407 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/rep_barrier/eccc45434a07413990578d9b62a2e144 2023-07-13 15:16:54,408 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,408 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 15:16:54,409 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:54,409 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table 2023-07-13 15:16:54,409 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 15:16:54,415 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/table/f2795f71130c4d9983d361b9601ad937 2023-07-13 15:16:54,415 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,416 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:54,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740 2023-07-13 15:16:54,419 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
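The FlushLargeStoresPolicy message above is the fallback path: hbase:meta's descriptor does not set hbase.hregion.percolumnfamilyflush.size.lower.bound, so the policy divides the region's memstore flush size across its column families. With a 128 MB flush size and the three families opened above (info, rep_barrier, table), that is 134217728 / 3 = 44739242 bytes, roughly the "42.7 M" quoted here and matching flushSizeLowerBound=44739242 reported when the region finishes opening below. The following is an illustrative sketch of setting the bound explicitly on an ordinary table; the table name and 16 MB value are placeholders.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class FlushLowerBoundSketch {
  // Assumes an open Admin handle; the table and the 16 MB value are hypothetical.
  static void setPerFamilyFlushLowerBound(Admin admin) throws IOException {
    TableName table = TableName.valueOf("example_table");
    TableDescriptor current = admin.getDescriptor(table);
    TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
        // Per-table override read by FlushLargeStoresPolicy, per the log message above.
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound", "16777216")
        .build();
    admin.modifyTable(updated);
  }
}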
2023-07-13 15:16:54,421 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 15:16:54,422 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=183; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10327199200, jitterRate=-0.038204625248909}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 15:16:54,422 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 15:16:54,423 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=146, masterSystemTime=1689261414293 2023-07-13 15:16:54,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 15:16:54,429 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 15:16:54,429 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40629,1689261403333, state=OPEN 2023-07-13 15:16:54,431 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 15:16:54,431 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 15:16:54,432 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE 2023-07-13 15:16:54,433 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:54,433 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261414433"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261414433"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261414433"}]},"ts":"1689261414433"} 2023-07-13 15:16:54,433 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35181] ipc.CallRunner(144): callId: 63 service: ClientService methodName: Mutate size: 275 connection: 172.31.14.131:47362 deadline: 1689261474433, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40629 startCode=1689261403333. As of locationSeqNum=179. 
2023-07-13 15:16:54,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=140 2023-07-13 15:16:54,436 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=140, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40629,1689261403333 in 291 msec 2023-07-13 15:16:54,437 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 792 msec 2023-07-13 15:16:54,535 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:54,537 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:54,539 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=142, state=RUNNABLE; CloseRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,35843,1689261403676}] 2023-07-13 15:16:54,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,584 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6f63fe7474be7b61966d8c0a666e0157, disabling compactions & flushes 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:54,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:54,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:54,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. after waiting 0 ms 2023-07-13 15:16:54,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,586 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 
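The REOPEN/MOVE transitions running through here (pids 139-142) are scheduled by RSGroupAdminServer itself as part of the server move; no separate client call is involved beyond the original MoveServers RPC. For comparison, a manually requested move of a single region goes through the public Admin API. This is an illustrative sketch only; the encoded region name, host, port, and start code are copied from the log.

import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

public final class MoveRegionSketch {
  // Assumes an open Admin handle. Values mirror the hbase:quota move in the log.
  static void moveQuotaRegion(Admin admin) throws IOException {
    byte[] encodedRegionName = Bytes.toBytes("6f63fe7474be7b61966d8c0a666e0157");
    ServerName target =
        ServerName.valueOf("jenkins-hbase4.apache.org", 40629, 1689261403333L);
    // Asks the master to close the region where it is and reopen it on the target,
    // i.e. the same REOPEN/MOVE style of transition seen in this log.
    admin.move(encodedRegionName, target);
  }
}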
2023-07-13 15:16:54,586 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 111352044b1bd403da18db964c499c82 1/1 column families, dataSize=2.22 KB heapSize=3.72 KB 2023-07-13 15:16:54,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 15:16:54,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:54,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:54,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6f63fe7474be7b61966d8c0a666e0157 move to jenkins-hbase4.apache.org,40629,1689261403333 record at close sequenceid=5 2023-07-13 15:16:54,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,595 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=CLOSED 2023-07-13 15:16:54,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261414595"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261414595"}]},"ts":"1689261414595"} 2023-07-13 15:16:54,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-13 15:16:54,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; CloseRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,35181,1689261403500 in 949 msec 2023-07-13 15:16:54,600 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=139, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40629,1689261403333; forceNewPlan=false, retain=false 2023-07-13 15:16:54,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.22 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/4e549ceeac8c4a2cb547a3064010b6d2 as 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/4e549ceeac8c4a2cb547a3064010b6d2, entries=5, sequenceid=95, filesize=5.3 K 2023-07-13 15:16:54,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.22 KB/2276, heapSize ~3.70 KB/3792, currentSize=0 B/0 for 111352044b1bd403da18db964c499c82 in 32ms, sequenceid=95, compaction requested=false 2023-07-13 15:16:54,624 DEBUG [StoreCloser-hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.-1] regionserver.HStore(2712): Moving the files [hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60] to archive 2023-07-13 15:16:54,625 DEBUG [StoreCloser-hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.-1] backup.HFileArchiver(360): Archiving compacted files. 
2023-07-13 15:16:54,626 DEBUG [StoreCloser-hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/9cc85fdd31c84a01b1065fb63289ca00 2023-07-13 15:16:54,627 DEBUG [StoreCloser-hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/a4b8548565d54812ae823f3bc7af5c62 2023-07-13 15:16:54,629 DEBUG [StoreCloser-hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.-1] backup.HFileArchiver(582): Archived from FileableStoreFile, hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60 to hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/archive/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/b38b6008019f46ea832797c52903ef60 2023-07-13 15:16:54,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=83 2023-07-13 15:16:54,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:54,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:54,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 111352044b1bd403da18db964c499c82 move to jenkins-hbase4.apache.org,40629,1689261403333 record at close sequenceid=95 2023-07-13 15:16:54,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,638 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=CLOSED 2023-07-13 15:16:54,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:54,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:54,639 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261414638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261414638"}]},"ts":"1689261414638"} 2023-07-13 15:16:54,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:54,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:54,640 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:54,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=142 2023-07-13 15:16:54,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=142, state=SUCCESS; CloseRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,35843,1689261403676 in 102 msec 2023-07-13 15:16:54,646 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=142, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40629,1689261403333; forceNewPlan=false, retain=false 2023-07-13 15:16:54,647 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,647 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261414647"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261414647"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261414647"}]},"ts":"1689261414647"} 2023-07-13 15:16:54,654 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] procedure.ProcedureSyncWait(216): waitFor pid=139 2023-07-13 15:16:54,658 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261414650"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261414650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261414650"}]},"ts":"1689261414650"} 2023-07-13 15:16:54,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=139, state=RUNNABLE; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,40629,1689261403333}] 2023-07-13 15:16:54,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, 
ppid=142, state=RUNNABLE; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,40629,1689261403333}] 2023-07-13 15:16:54,670 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/26.seqid, newMaxSeqId=26, maxSeqId=23 2023-07-13 15:16:54,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:54,679 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:54,679 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8f9b3c3c0c701a7e057738cfe2a31027 move to jenkins-hbase4.apache.org,40629,1689261403333 record at close sequenceid=24 2023-07-13 15:16:54,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,681 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=CLOSED 2023-07-13 15:16:54,681 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261414681"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689261414681"}]},"ts":"1689261414681"} 2023-07-13 15:16:54,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=141 2023-07-13 15:16:54,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=141, state=SUCCESS; CloseRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,35843,1689261403676 in 1.0240 sec 2023-07-13 15:16:54,684 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=141, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40629,1689261403333; forceNewPlan=false, retain=false 2023-07-13 15:16:54,815 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 111352044b1bd403da18db964c499c82, NAME => 'hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
service=MultiRowMutationService 2023-07-13 15:16:54,815 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,817 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,818 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:54,818 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m 2023-07-13 15:16:54,818 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 111352044b1bd403da18db964c499c82 columnFamilyName m 2023-07-13 15:16:54,828 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,828 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/4e549ceeac8c4a2cb547a3064010b6d2 2023-07-13 15:16:54,833 DEBUG [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(539): loaded 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/ee4cbb19ba2c487fbc3e9ddc06050cf7 2023-07-13 15:16:54,833 INFO [StoreOpener-111352044b1bd403da18db964c499c82-1] regionserver.HStore(310): Store=111352044b1bd403da18db964c499c82/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,835 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,835 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261414835"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689261414835"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689261414835"}]},"ts":"1689261414835"} 2023-07-13 15:16:54,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=141, state=RUNNABLE; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,40629,1689261403333}] 2023-07-13 15:16:54,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,841 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:54,842 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 111352044b1bd403da18db964c499c82; next sequenceid=99; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@174fdeda, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:54,843 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:54,843 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., pid=149, masterSystemTime=1689261414811 2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:54,845 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 
2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6f63fe7474be7b61966d8c0a666e0157, NAME => 'hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,845 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,846 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=111352044b1bd403da18db964c499c82, regionState=OPEN, openSeqNum=99, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,846 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689261414846"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261414846"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261414846"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261414846"}]},"ts":"1689261414846"} 2023-07-13 15:16:54,847 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,848 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:54,848 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/q 2023-07-13 15:16:54,849 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
6f63fe7474be7b61966d8c0a666e0157 columnFamilyName q 2023-07-13 15:16:54,850 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,850 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=142 2023-07-13 15:16:54,851 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:54,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=142, state=SUCCESS; OpenRegionProcedure 111352044b1bd403da18db964c499c82, server=jenkins-hbase4.apache.org,40629,1689261403333 in 188 msec 2023-07-13 15:16:54,851 DEBUG [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/u 2023-07-13 15:16:54,852 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6f63fe7474be7b61966d8c0a666e0157 columnFamilyName u 2023-07-13 15:16:54,852 INFO [StoreOpener-6f63fe7474be7b61966d8c0a666e0157-1] regionserver.HStore(310): Store=6f63fe7474be7b61966d8c0a666e0157/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:54,853 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=111352044b1bd403da18db964c499c82, REOPEN/MOVE in 1.2030 sec 2023-07-13 15:16:54,859 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,861 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-13 15:16:54,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:54,863 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6f63fe7474be7b61966d8c0a666e0157; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10477401120, jitterRate=-0.024215981364250183}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 15:16:54,863 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:54,864 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157., pid=148, masterSystemTime=1689261414811 2023-07-13 15:16:54,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:54,866 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:54,866 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=6f63fe7474be7b61966d8c0a666e0157, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:54,866 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689261414866"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261414866"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261414866"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261414866"}]},"ts":"1689261414866"} 2023-07-13 15:16:54,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=139 2023-07-13 15:16:54,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=139, state=SUCCESS; OpenRegionProcedure 6f63fe7474be7b61966d8c0a666e0157, server=jenkins-hbase4.apache.org,40629,1689261403333 in 208 msec 2023-07-13 15:16:54,872 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=6f63fe7474be7b61966d8c0a666e0157, REOPEN/MOVE in 1.2280 sec 2023-07-13 15:16:54,995 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:54,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f9b3c3c0c701a7e057738cfe2a31027, NAME => 'hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.', STARTKEY => '', ENDKEY => ''} 2023-07-13 15:16:54,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 15:16:54,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,997 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:54,998 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:54,998 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info 2023-07-13 15:16:54,998 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f9b3c3c0c701a7e057738cfe2a31027 columnFamilyName info 2023-07-13 15:16:55,004 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:55,004 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/0c14a3c301924edd9435fdf2dd29da5a 2023-07-13 15:16:55,009 DEBUG [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(539): loaded 
hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/info/8877d21c56c24ede9d59119e77b5fd77 2023-07-13 15:16:55,009 INFO [StoreOpener-8f9b3c3c0c701a7e057738cfe2a31027-1] regionserver.HStore(310): Store=8f9b3c3c0c701a7e057738cfe2a31027/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 15:16:55,010 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:55,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:55,014 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:55,015 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f9b3c3c0c701a7e057738cfe2a31027; next sequenceid=27; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11594266720, jitterRate=0.07980023324489594}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 15:16:55,015 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:55,015 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., pid=150, masterSystemTime=1689261414992 2023-07-13 15:16:55,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:55,017 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:55,017 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=8f9b3c3c0c701a7e057738cfe2a31027, regionState=OPEN, openSeqNum=27, regionLocation=jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:55,018 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689261415017"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689261415017"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689261415017"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689261415017"}]},"ts":"1689261415017"} 2023-07-13 15:16:55,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=141 2023-07-13 15:16:55,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=141, state=SUCCESS; OpenRegionProcedure 8f9b3c3c0c701a7e057738cfe2a31027, server=jenkins-hbase4.apache.org,40629,1689261403333 in 183 msec 2023-07-13 15:16:55,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8f9b3c3c0c701a7e057738cfe2a31027, REOPEN/MOVE in 1.3750 sec 2023-07-13 15:16:55,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33699,1689261413023, jenkins-hbase4.apache.org,35181,1689261403500, jenkins-hbase4.apache.org,35843,1689261403676] are moved back to default 2023-07-13 15:16:55,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testClearDeadServers_1208319808 2023-07-13 15:16:55,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:55,659 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35843] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:50394 deadline: 1689261475659, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40629 startCode=1689261403333. As of locationSeqNum=95. 2023-07-13 15:16:55,762 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35181] ipc.CallRunner(144): callId: 5 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:47946 deadline: 1689261475762, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40629 startCode=1689261403333. As of locationSeqNum=179. 
2023-07-13 15:16:55,865 DEBUG [hconnection-0x1187a96b-shared-pool-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 15:16:55,866 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58348, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 15:16:55,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:55,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:55,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1208319808 2023-07-13 15:16:55,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:55,878 DEBUG [Listener at localhost/35161] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 15:16:55,879 INFO [RS-EventLoopGroup-17-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 15:16:55,879 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33699] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33699,1689261413023' ***** 2023-07-13 15:16:55,879 INFO [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33699] regionserver.HRegionServer(2311): STOPPED: Called by admin client hconnection-0x5495dac4 2023-07-13 15:16:55,879 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:55,882 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:55,928 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:55,928 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:56,062 INFO [RS:3;jenkins-hbase4:33699] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3fff9b42{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:56,062 INFO [RS:3;jenkins-hbase4:33699] server.AbstractConnector(383): Stopped ServerConnector@82b774c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:56,062 INFO [RS:3;jenkins-hbase4:33699] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:56,063 INFO [RS:3;jenkins-hbase4:33699] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d302e38{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:56,064 INFO [RS:3;jenkins-hbase4:33699] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4a7d1dc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:56,065 INFO [RS:3;jenkins-hbase4:33699] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:56,065 INFO [RS:3;jenkins-hbase4:33699] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:56,065 INFO [RS:3;jenkins-hbase4:33699] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:56,065 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:56,065 DEBUG [RS:3;jenkins-hbase4:33699] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a7acbea to 127.0.0.1:56695 2023-07-13 15:16:56,065 DEBUG [RS:3;jenkins-hbase4:33699] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:56,065 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33699,1689261413023; all regions closed. 2023-07-13 15:16:56,073 DEBUG [RS:3;jenkins-hbase4:33699] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33699%2C1689261413023:(num 1689261413456) 2023-07-13 15:16:56,074 DEBUG [RS:3;jenkins-hbase4:33699] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:56,074 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:56,074 INFO [RS:3;jenkins-hbase4:33699] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:56,075 INFO [RS:3;jenkins-hbase4:33699] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33699 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 2023-07-13 15:16:56,299 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:56,298 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:56,369 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33699,1689261413023] 2023-07-13 15:16:56,369 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33699,1689261413023; numProcessing=1 2023-07-13 15:16:56,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,369 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,369 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,414 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33699,1689261413023 already deleted, retry=false 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,414 INFO [RegionServerTracker-0] master.ServerManager(568): Processing expiration of jenkins-hbase4.apache.org,33699,1689261413023 on jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,414 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,414 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:56,415 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:56,414 INFO [zk-event-processor-pool-0] replication.ReplicationTrackerZKImpl$OtherRegionServerWatcher(124): /hbase/rs/jenkins-hbase4.apache.org,33699,1689261413023 znode expired, triggering replicatorRemoved event 2023-07-13 15:16:56,415 DEBUG [RegionServerTracker-0] procedure2.ProcedureExecutor(1029): Stored pid=151, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure jenkins-hbase4.apache.org,33699,1689261413023, splitWal=true, meta=false 2023-07-13 15:16:56,415 INFO [RegionServerTracker-0] assignment.AssignmentManager(1734): Scheduled ServerCrashProcedure pid=151 for jenkins-hbase4.apache.org,33699,1689261413023 (carryingMeta=false) jenkins-hbase4.apache.org,33699,1689261413023/CRASHED/regionCount=0/lock=java.util.concurrent.locks.ReentrantReadWriteLock@2c3c5f9d[Write locks = 1, Read locks = 0], oldState=ONLINE. 
2023-07-13 15:16:56,415 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 15:16:56,415 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,417 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35843] ipc.CallRunner(144): callId: 78 service: ClientService methodName: ExecService size: 574 connection: 172.31.14.131:44302 deadline: 1689261476417, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40629 startCode=1689261403333. As of locationSeqNum=95. 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,417 INFO [PEWorker-3] procedure.ServerCrashProcedure(161): Start pid=151, state=RUNNABLE:SERVER_CRASH_START, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33699,1689261413023, splitWal=true, meta=false 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:56,417 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,418 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:56,419 INFO [PEWorker-3] procedure.ServerCrashProcedure(199): jenkins-hbase4.apache.org,33699,1689261413023 had 0 regions 2023-07-13 15:16:56,420 INFO [PEWorker-3] procedure.ServerCrashProcedure(300): Splitting WALs pid=151, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure jenkins-hbase4.apache.org,33699,1689261413023, splitWal=true, meta=false, isMeta: false 2023-07-13 15:16:56,422 DEBUG 
[PEWorker-3] master.MasterWalManager(318): Renamed region directory: hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023-splitting 2023-07-13 15:16:56,422 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023-splitting dir is empty, no logs to split. 2023-07-13 15:16:56,422 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,33699,1689261413023 WAL count=0, meta=false 2023-07-13 15:16:56,424 INFO [PEWorker-3] master.SplitLogManager(171): hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023-splitting dir is empty, no logs to split. 2023-07-13 15:16:56,424 INFO [PEWorker-3] master.SplitWALManager(106): jenkins-hbase4.apache.org,33699,1689261413023 WAL count=0, meta=false 2023-07-13 15:16:56,424 DEBUG [PEWorker-3] procedure.ServerCrashProcedure(290): Check if jenkins-hbase4.apache.org,33699,1689261413023 WAL splitting is done? wals=0, meta=false 2023-07-13 15:16:56,426 INFO [PEWorker-3] procedure.ServerCrashProcedure(282): Remove WAL directory for jenkins-hbase4.apache.org,33699,1689261413023 failed, ignore...File hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,33699,1689261413023-splitting does not exist. 2023-07-13 15:16:56,427 INFO [PEWorker-3] procedure.ServerCrashProcedure(251): removed crashed server jenkins-hbase4.apache.org,33699,1689261413023 after splitting done 2023-07-13 15:16:56,427 DEBUG [PEWorker-3] master.DeadServer(114): Removed jenkins-hbase4.apache.org,33699,1689261413023 from processing; numProcessing=0 2023-07-13 15:16:56,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, state=SUCCESS; ServerCrashProcedure jenkins-hbase4.apache.org,33699,1689261413023, splitWal=true, meta=false in 13 msec 2023-07-13 15:16:56,469 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:56,469 INFO [RS:3;jenkins-hbase4:33699] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33699,1689261413023; zookeeper connection closed. 2023-07-13 15:16:56,469 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:33699-0x1015f4159470028, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:56,469 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1e316063] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1e316063 2023-07-13 15:16:56,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(2362): Client=jenkins//172.31.14.131 clear dead region servers. 
2023-07-13 15:16:56,522 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:56,522 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1208319808 2023-07-13 15:16:56,522 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:56,523 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:56,583 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 15:16:56,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:56,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1208319808 2023-07-13 15:16:56,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:56,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:56,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(609): Remove decommissioned servers [jenkins-hbase4.apache.org:33699] from RSGroup done 2023-07-13 15:16:56,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testClearDeadServers_1208319808 2023-07-13 15:16:56,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:56,786 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35843] ipc.CallRunner(144): callId: 82 service: ClientService methodName: Scan size: 146 connection: 172.31.14.131:44302 deadline: 1689261476786, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=40629 startCode=1689261403333. As of locationSeqNum=24. 
2023-07-13 15:16:56,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:56,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:56,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:56,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:56,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:56,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35843, jenkins-hbase4.apache.org:35181] to rsgroup default 2023-07-13 15:16:56,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:56,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testClearDeadServers_1208319808 2023-07-13 15:16:56,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:56,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 15:16:57,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testClearDeadServers_1208319808, current retry=0 2023-07-13 15:16:57,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35181,1689261403500, jenkins-hbase4.apache.org,35843,1689261403676] are moved back to Group_testClearDeadServers_1208319808 2023-07-13 15:16:57,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testClearDeadServers_1208319808 => default 2023-07-13 15:16:57,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:57,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testClearDeadServers_1208319808 2023-07-13 15:16:57,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:57,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:57,047 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 15:16:57,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:57,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-13 15:16:57,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 15:16:57,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-13 15:16:57,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-13 15:16:57,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-13 15:16:57,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-13 15:16:57,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:57,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 15:16:57,229 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 15:16:57,232 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 15:16:57,244 INFO [Listener at localhost/35161] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, 
handlerCount=1 2023-07-13 15:16:57,245 INFO [Listener at localhost/35161] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 15:16:57,246 INFO [Listener at localhost/35161] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44647 2023-07-13 15:16:57,246 INFO [Listener at localhost/35161] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 15:16:57,248 DEBUG [Listener at localhost/35161] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 15:16:57,248 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:57,249 INFO [Listener at localhost/35161] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 15:16:57,251 INFO [Listener at localhost/35161] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44647 connecting to ZooKeeper ensemble=127.0.0.1:56695 2023-07-13 15:16:57,283 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:446470x0, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 15:16:57,285 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:446470x0, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 15:16:57,285 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44647-0x1015f415947002a connected 2023-07-13 15:16:57,286 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 15:16:57,287 DEBUG [Listener at localhost/35161] zookeeper.ZKUtil(164): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 15:16:57,287 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44647 2023-07-13 15:16:57,287 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44647 2023-07-13 15:16:57,288 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44647 2023-07-13 15:16:57,288 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44647 2023-07-13 15:16:57,288 DEBUG [Listener at localhost/35161] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44647 2023-07-13 15:16:57,290 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 15:16:57,290 INFO [Listener at localhost/35161] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 15:16:57,290 INFO [Listener at localhost/35161] 
http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 15:16:57,291 INFO [Listener at localhost/35161] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 15:16:57,291 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 15:16:57,291 INFO [Listener at localhost/35161] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 15:16:57,291 INFO [Listener at localhost/35161] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 15:16:57,292 INFO [Listener at localhost/35161] http.HttpServer(1146): Jetty bound to port 34081 2023-07-13 15:16:57,292 INFO [Listener at localhost/35161] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 15:16:57,293 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:57,293 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1916b0e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,AVAILABLE} 2023-07-13 15:16:57,293 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:57,293 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b0e3a36{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 15:16:57,408 INFO [Listener at localhost/35161] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 15:16:57,410 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 15:16:57,410 INFO [Listener at localhost/35161] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 15:16:57,410 INFO [Listener at localhost/35161] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 15:16:57,411 INFO [Listener at localhost/35161] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 15:16:57,412 INFO [Listener at localhost/35161] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@601de5f1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/java.io.tmpdir/jetty-0_0_0_0-34081-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9046462513958922573/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:57,414 INFO [Listener at localhost/35161] server.AbstractConnector(333): Started ServerConnector@4ab1f51d{HTTP/1.1, 
(http/1.1)}{0.0.0.0:34081} 2023-07-13 15:16:57,414 INFO [Listener at localhost/35161] server.Server(415): Started @57414ms 2023-07-13 15:16:57,417 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(951): ClusterId : d31061b0-c369-4308-9b4a-b2efd227e0b4 2023-07-13 15:16:57,417 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 15:16:57,422 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 15:16:57,422 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 15:16:57,437 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 15:16:57,438 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ReadOnlyZKClient(139): Connect 0x1183eb58 to 127.0.0.1:56695 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 15:16:57,460 DEBUG [RS:4;jenkins-hbase4:44647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f441718, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 15:16:57,460 DEBUG [RS:4;jenkins-hbase4:44647] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b7b8028, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:57,469 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:4;jenkins-hbase4:44647 2023-07-13 15:16:57,469 INFO [RS:4;jenkins-hbase4:44647] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 15:16:57,469 INFO [RS:4;jenkins-hbase4:44647] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 15:16:57,469 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 15:16:57,469 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46327,1689261403181 with isa=jenkins-hbase4.apache.org/172.31.14.131:44647, startcode=1689261417244 2023-07-13 15:16:57,469 DEBUG [RS:4;jenkins-hbase4:44647] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 15:16:57,471 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50139, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.12 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 15:16:57,471 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46327] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,471 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 15:16:57,472 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046 2023-07-13 15:16:57,472 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36199 2023-07-13 15:16:57,472 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46133 2023-07-13 15:16:57,491 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,492 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,492 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:57,492 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,492 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,492 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,492 WARN [RS:4;jenkins-hbase4:44647] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 15:16:57,492 INFO [RS:4;jenkins-hbase4:44647] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 15:16:57,492 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/WALs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,492 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 15:16:57,492 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44647,1689261417244] 2023-07-13 15:16:57,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,492 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,506 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46327,1689261403181] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,507 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,507 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,507 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,507 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,508 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ZKUtil(162): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,508 DEBUG [RS:4;jenkins-hbase4:44647] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 15:16:57,508 INFO [RS:4;jenkins-hbase4:44647] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 15:16:57,510 INFO [RS:4;jenkins-hbase4:44647] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 15:16:57,510 INFO [RS:4;jenkins-hbase4:44647] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 15:16:57,510 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:57,510 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 15:16:57,512 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,512 DEBUG [RS:4;jenkins-hbase4:44647] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-13 15:16:57,515 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:57,515 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:57,515 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 15:16:57,530 INFO [RS:4;jenkins-hbase4:44647] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 15:16:57,530 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44647,1689261417244-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 15:16:57,541 INFO [RS:4;jenkins-hbase4:44647] regionserver.Replication(203): jenkins-hbase4.apache.org,44647,1689261417244 started 2023-07-13 15:16:57,541 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44647,1689261417244, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44647, sessionid=0x1015f415947002a 2023-07-13 15:16:57,541 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 15:16:57,541 DEBUG [RS:4;jenkins-hbase4:44647] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,541 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44647,1689261417244' 2023-07-13 15:16:57,541 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 15:16:57,541 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 15:16:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44647,1689261417244' 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 15:16:57,542 DEBUG [RS:4;jenkins-hbase4:44647] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 15:16:57,542 INFO [RS:4;jenkins-hbase4:44647] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 15:16:57,542 INFO [RS:4;jenkins-hbase4:44647] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 15:16:57,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 15:16:57,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 15:16:57,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 15:16:57,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 15:16:57,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:57,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:57,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46327] to rsgroup master 2023-07-13 15:16:57,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 15:16:57,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] ipc.CallRunner(144): callId: 104 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:54730 deadline: 1689262617573, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. 2023-07-13 15:16:57,574 WARN [Listener at localhost/35161] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor64.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBasics.afterMethod(TestRSGroupsBasics.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46327 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 15:16:57,577 INFO [Listener at localhost/35161] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 15:16:57,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-13 15:16:57,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 15:16:57,578 INFO [Listener at localhost/35161] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35181, jenkins-hbase4.apache.org:35843, jenkins-hbase4.apache.org:40629, jenkins-hbase4.apache.org:44647], Tables:[hbase:meta, hbase:quota, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 15:16:57,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-13 15:16:57,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46327] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 15:16:57,600 INFO [Listener at localhost/35161] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsBasics#testClearDeadServers Thread=577 (was 552) - Thread LEAK? -, OpenFileDescriptor=897 (was 871) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=428 (was 448), ProcessCount=172 (was 172), AvailableMemoryMB=5716 (was 5795) 2023-07-13 15:16:57,600 WARN [Listener at localhost/35161] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-13 15:16:57,600 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 15:16:57,600 INFO [Listener at localhost/35161] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x66512a10 to 127.0.0.1:56695 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] util.JVMClusterUtil(257): Found active master hash=1966975976, stopped=false 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 15:16:57,601 DEBUG [Listener at localhost/35161] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 15:16:57,601 INFO [Listener at localhost/35161] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:57,629 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:57,629 INFO [Listener at localhost/35161] 
procedure2.ProcedureExecutor(629): Stopping 2023-07-13 15:16:57,629 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:57,629 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:57,631 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:57,629 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:57,629 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 15:16:57,631 DEBUG [Listener at localhost/35161] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e3dd715 to 127.0.0.1:56695 2023-07-13 15:16:57,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:57,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:57,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:57,631 DEBUG [Listener at localhost/35161] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:57,632 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40629,1689261403333' ***** 2023-07-13 15:16:57,632 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:57,632 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35181,1689261403500' ***** 2023-07-13 15:16:57,632 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:57,632 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 15:16:57,632 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:57,632 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase4.apache.org,35843,1689261403676' ***** 2023-07-13 15:16:57,632 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:57,633 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:57,635 INFO [Listener at localhost/35161] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44647,1689261417244' ***** 2023-07-13 15:16:57,635 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:57,636 INFO [Listener at localhost/35161] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 15:16:57,638 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:57,640 INFO [RS:1;jenkins-hbase4:35181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1a339e43{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:57,640 INFO [RS:1;jenkins-hbase4:35181] server.AbstractConnector(383): Stopped ServerConnector@4cb46a72{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,640 INFO [RS:0;jenkins-hbase4:40629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@727136fa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:57,641 INFO [RS:2;jenkins-hbase4:35843] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3e9b0004{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:57,641 INFO [RS:1;jenkins-hbase4:35181] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:57,641 INFO [RS:4;jenkins-hbase4:44647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@601de5f1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 15:16:57,642 INFO [RS:0;jenkins-hbase4:40629] server.AbstractConnector(383): Stopped ServerConnector@6139a553{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,642 INFO [RS:2;jenkins-hbase4:35843] server.AbstractConnector(383): Stopped ServerConnector@4b5beed5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,642 INFO [RS:0;jenkins-hbase4:40629] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:57,642 INFO [RS:1;jenkins-hbase4:35181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f0f0e67{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:57,642 INFO [RS:2;jenkins-hbase4:35843] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:57,642 INFO [RS:4;jenkins-hbase4:44647] server.AbstractConnector(383): Stopped ServerConnector@4ab1f51d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,644 INFO [RS:1;jenkins-hbase4:35181] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@307021bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:57,643 INFO [RS:0;jenkins-hbase4:40629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c13ea68{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:57,645 INFO [RS:2;jenkins-hbase4:35843] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42c1a7eb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:57,645 INFO [RS:0;jenkins-hbase4:40629] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c837354{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:57,644 INFO [RS:4;jenkins-hbase4:44647] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:57,646 INFO [RS:2;jenkins-hbase4:35843] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39f0e8d4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:57,646 INFO [RS:1;jenkins-hbase4:35181] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:57,646 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:57,646 INFO [RS:1;jenkins-hbase4:35181] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:57,647 INFO [RS:1;jenkins-hbase4:35181] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:57,647 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,647 DEBUG [RS:1;jenkins-hbase4:35181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7de674e9 to 127.0.0.1:56695 2023-07-13 15:16:57,647 DEBUG [RS:1;jenkins-hbase4:35181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,647 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35181,1689261403500; all regions closed. 2023-07-13 15:16:57,648 INFO [RS:0;jenkins-hbase4:40629] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:57,648 INFO [RS:0;jenkins-hbase4:40629] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-13 15:16:57,648 INFO [RS:4;jenkins-hbase4:44647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3b0e3a36{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:57,648 INFO [RS:2;jenkins-hbase4:35843] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:57,649 INFO [RS:4;jenkins-hbase4:44647] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1916b0e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:57,648 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:57,648 INFO [RS:0;jenkins-hbase4:40629] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:57,649 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:57,649 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(3305): Received CLOSE for 111352044b1bd403da18db964c499c82 2023-07-13 15:16:57,649 INFO [RS:2;jenkins-hbase4:35843] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:57,650 INFO [RS:2;jenkins-hbase4:35843] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:57,650 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,650 DEBUG [RS:2;jenkins-hbase4:35843] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x313585c4 to 127.0.0.1:56695 2023-07-13 15:16:57,650 DEBUG [RS:2;jenkins-hbase4:35843] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,650 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35843,1689261403676; all regions closed. 2023-07-13 15:16:57,650 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(3305): Received CLOSE for 6f63fe7474be7b61966d8c0a666e0157 2023-07-13 15:16:57,650 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(3305): Received CLOSE for 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:57,650 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,651 INFO [RS:4;jenkins-hbase4:44647] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 15:16:57,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 111352044b1bd403da18db964c499c82, disabling compactions & flushes 2023-07-13 15:16:57,651 INFO [RS:4;jenkins-hbase4:44647] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 15:16:57,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 
2023-07-13 15:16:57,651 DEBUG [RS:0;jenkins-hbase4:40629] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x38b5f8c1 to 127.0.0.1:56695 2023-07-13 15:16:57,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:57,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. after waiting 0 ms 2023-07-13 15:16:57,651 INFO [RS:4;jenkins-hbase4:44647] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 15:16:57,651 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 15:16:57,652 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:57,651 DEBUG [RS:0;jenkins-hbase4:40629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,652 INFO [RS:0;jenkins-hbase4:40629] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:57,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 111352044b1bd403da18db964c499c82 1/1 column families, dataSize=2.05 KB heapSize=3.55 KB 2023-07-13 15:16:57,652 DEBUG [RS:4;jenkins-hbase4:44647] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1183eb58 to 127.0.0.1:56695 2023-07-13 15:16:57,652 DEBUG [RS:4;jenkins-hbase4:44647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,652 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44647,1689261417244; all regions closed. 2023-07-13 15:16:57,652 DEBUG [RS:4;jenkins-hbase4:44647] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,652 INFO [RS:4;jenkins-hbase4:44647] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,652 INFO [RS:0;jenkins-hbase4:40629] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:57,652 INFO [RS:0;jenkins-hbase4:40629] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:57,652 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 15:16:57,653 INFO [RS:4;jenkins-hbase4:44647] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:57,655 INFO [RS:4;jenkins-hbase4:44647] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-13 15:16:57,655 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 15:16:57,654 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-13 15:16:57,656 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1478): Online Regions={111352044b1bd403da18db964c499c82=hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82., 6f63fe7474be7b61966d8c0a666e0157=hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157., 8f9b3c3c0c701a7e057738cfe2a31027=hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027., 1588230740=hbase:meta,,1.1588230740} 2023-07-13 15:16:57,656 DEBUG [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1504): Waiting on 111352044b1bd403da18db964c499c82, 1588230740, 6f63fe7474be7b61966d8c0a666e0157, 8f9b3c3c0c701a7e057738cfe2a31027 2023-07-13 15:16:57,656 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 15:16:57,655 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:57,655 INFO [RS:4;jenkins-hbase4:44647] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:57,657 INFO [RS:4;jenkins-hbase4:44647] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:57,656 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 15:16:57,658 INFO [RS:4;jenkins-hbase4:44647] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44647 2023-07-13 15:16:57,658 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 15:16:57,658 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 15:16:57,659 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.79 KB heapSize=6.97 KB 2023-07-13 15:16:57,661 DEBUG [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:57,661 INFO [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35181%2C1689261403500.meta:.meta(num 1689261404479) 2023-07-13 15:16:57,670 DEBUG [RS:2;jenkins-hbase4:35843] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:57,670 INFO [RS:2;jenkins-hbase4:35843] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35843%2C1689261403676:(num 1689261404433) 2023-07-13 15:16:57,670 DEBUG [RS:2;jenkins-hbase4:35843] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,670 INFO [RS:2;jenkins-hbase4:35843] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,672 INFO [RS:2;jenkins-hbase4:35843] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:57,672 INFO 
[RS:2;jenkins-hbase4:35843] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:57,672 INFO [RS:2;jenkins-hbase4:35843] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:57,672 INFO [RS:2;jenkins-hbase4:35843] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 15:16:57,672 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:57,674 INFO [RS:2;jenkins-hbase4:35843] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35843 2023-07-13 15:16:57,678 DEBUG [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35181%2C1689261403500:(num 1689261404436) 2023-07-13 15:16:57,678 DEBUG [RS:1;jenkins-hbase4:35181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 15:16:57,678 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 15:16:57,678 INFO [RS:1;jenkins-hbase4:35181] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 15:16:57,679 INFO [RS:1;jenkins-hbase4:35181] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35181 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44647,1689261417244 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,683 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,696 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,701 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.79 KB at sequenceid=195 (bloomFilter=false), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/883b7be536924ad1976fd90070d23d75 2023-07-13 15:16:57,702 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.05 KB at sequenceid=108 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/cb358f50c73d40ea926376caed44b5f1 2023-07-13 15:16:57,707 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cb358f50c73d40ea926376caed44b5f1 2023-07-13 15:16:57,707 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/.tmp/info/883b7be536924ad1976fd90070d23d75 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/883b7be536924ad1976fd90070d23d75 2023-07-13 15:16:57,708 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/.tmp/m/cb358f50c73d40ea926376caed44b5f1 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/cb358f50c73d40ea926376caed44b5f1 2023-07-13 15:16:57,717 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,721 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/info/883b7be536924ad1976fd90070d23d75, entries=31, sequenceid=195, filesize=8.3 K 2023-07-13 15:16:57,721 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cb358f50c73d40ea926376caed44b5f1 2023-07-13 15:16:57,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/m/cb358f50c73d40ea926376caed44b5f1, entries=4, sequenceid=108, filesize=5.3 K 2023-07-13 15:16:57,722 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.79 KB/3876, heapSize ~6.45 KB/6608, currentSize=0 B/0 for 1588230740 in 63ms, sequenceid=195, compaction requested=true 2023-07-13 15:16:57,722 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 15:16:57,722 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.05 KB/2100, heapSize ~3.54 KB/3624, currentSize=0 B/0 for 111352044b1bd403da18db964c499c82 in 70ms, sequenceid=108, compaction requested=true 2023-07-13 15:16:57,722 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 15:16:57,726 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/rsgroup/111352044b1bd403da18db964c499c82/recovered.edits/111.seqid, newMaxSeqId=111, maxSeqId=98 2023-07-13 15:16:57,729 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/meta/1588230740/recovered.edits/198.seqid, newMaxSeqId=198, maxSeqId=182 2023-07-13 15:16:57,729 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:57,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 111352044b1bd403da18db964c499c82: 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689261370837.111352044b1bd403da18db964c499c82. 2023-07-13 15:16:57,730 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6f63fe7474be7b61966d8c0a666e0157, disabling compactions & flushes 2023-07-13 15:16:57,730 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 15:16:57,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:57,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:57,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. after waiting 0 ms 2023-07-13 15:16:57,731 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:57,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/quota/6f63fe7474be7b61966d8c0a666e0157/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 15:16:57,741 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 2023-07-13 15:16:57,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6f63fe7474be7b61966d8c0a666e0157: 2023-07-13 15:16:57,741 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689261397080.6f63fe7474be7b61966d8c0a666e0157. 
2023-07-13 15:16:57,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f9b3c3c0c701a7e057738cfe2a31027, disabling compactions & flushes 2023-07-13 15:16:57,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:57,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:57,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. after waiting 0 ms 2023-07-13 15:16:57,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:57,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/data/hbase/namespace/8f9b3c3c0c701a7e057738cfe2a31027/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=26 2023-07-13 15:16:57,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 2023-07-13 15:16:57,745 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f9b3c3c0c701a7e057738cfe2a31027: 2023-07-13 15:16:57,746 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689261370677.8f9b3c3c0c701a7e057738cfe2a31027. 
2023-07-13 15:16:57,773 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44647,1689261417244] 2023-07-13 15:16:57,773 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44647,1689261417244; numProcessing=1 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35181,1689261403500 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,783 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35843,1689261403676 2023-07-13 15:16:57,798 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44647,1689261417244 already deleted, retry=false 2023-07-13 15:16:57,798 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44647,1689261417244 expired; onlineServers=3 2023-07-13 15:16:57,822 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35181,1689261403500] 2023-07-13 15:16:57,822 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35181,1689261403500; numProcessing=2 2023-07-13 15:16:57,837 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node 
/hbase/draining/jenkins-hbase4.apache.org,35181,1689261403500 already deleted, retry=false 2023-07-13 15:16:57,837 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35181,1689261403500 expired; onlineServers=2 2023-07-13 15:16:57,837 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35843,1689261403676] 2023-07-13 15:16:57,837 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35843,1689261403676; numProcessing=3 2023-07-13 15:16:57,856 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40629,1689261403333; all regions closed. 2023-07-13 15:16:57,860 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35843,1689261403676 already deleted, retry=false 2023-07-13 15:16:57,860 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35843,1689261403676 expired; onlineServers=1 2023-07-13 15:16:57,862 DEBUG [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:57,862 INFO [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40629%2C1689261403333.meta:.meta(num 1689261414308) 2023-07-13 15:16:57,868 DEBUG [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/oldWALs 2023-07-13 15:16:57,868 INFO [RS:0;jenkins-hbase4:40629] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40629%2C1689261403333:(num 1689261404433) 2023-07-13 15:16:57,868 DEBUG [RS:0;jenkins-hbase4:40629] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,868 INFO [RS:0;jenkins-hbase4:40629] regionserver.LeaseManager(133): Closed leases 2023-07-13 15:16:57,869 INFO [RS:0;jenkins-hbase4:40629] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 15:16:57,869 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 15:16:57,870 INFO [RS:0;jenkins-hbase4:40629] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40629 2023-07-13 15:16:57,883 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40629,1689261403333 2023-07-13 15:16:57,883 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 15:16:57,883 ERROR [Listener at localhost/35161-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3d66345d rejected from java.util.concurrent.ThreadPoolExecutor@3959b729[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 12] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-13 15:16:57,906 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40629,1689261403333] 2023-07-13 15:16:57,906 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40629,1689261403333; numProcessing=4 2023-07-13 15:16:57,921 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40629,1689261403333 already deleted, retry=false 2023-07-13 15:16:57,921 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40629,1689261403333 expired; onlineServers=0 2023-07-13 15:16:57,921 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46327,1689261403181' ***** 2023-07-13 15:16:57,921 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 15:16:57,922 DEBUG [M:0;jenkins-hbase4:46327] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71023197, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-13 15:16:57,922 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 15:16:57,925 INFO [M:0;jenkins-hbase4:46327] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3131b3ff{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 15:16:57,925 INFO 
[M:0;jenkins-hbase4:46327] server.AbstractConnector(383): Stopped ServerConnector@3a309945{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,926 INFO [M:0;jenkins-hbase4:46327] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 15:16:57,926 INFO [M:0;jenkins-hbase4:46327] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31f6b9dd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 15:16:57,927 INFO [M:0;jenkins-hbase4:46327] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@279c5ff3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/hadoop.log.dir/,STOPPED} 2023-07-13 15:16:57,927 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46327,1689261403181 2023-07-13 15:16:57,927 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46327,1689261403181; all regions closed. 2023-07-13 15:16:57,927 DEBUG [M:0;jenkins-hbase4:46327] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 15:16:57,927 INFO [M:0;jenkins-hbase4:46327] master.HMaster(1491): Stopping master jetty server 2023-07-13 15:16:57,928 INFO [M:0;jenkins-hbase4:46327] server.AbstractConnector(383): Stopped ServerConnector@38cd7617{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 15:16:57,928 DEBUG [M:0;jenkins-hbase4:46327] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 15:16:57,929 DEBUG [M:0;jenkins-hbase4:46327] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 15:16:57,929 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 15:16:57,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261404125] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689261404125,5,FailOnTimeoutGroup] 2023-07-13 15:16:57,929 INFO [M:0;jenkins-hbase4:46327] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 15:16:57,929 INFO [M:0;jenkins-hbase4:46327] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-13 15:16:57,929 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261404119] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689261404119,5,FailOnTimeoutGroup] 2023-07-13 15:16:57,929 INFO [M:0;jenkins-hbase4:46327] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-13 15:16:57,929 DEBUG [M:0;jenkins-hbase4:46327] master.HMaster(1512): Stopping service threads 2023-07-13 15:16:57,929 INFO [M:0;jenkins-hbase4:46327] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 15:16:57,929 ERROR [M:0;jenkins-hbase4:46327] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 15:16:57,929 INFO [M:0;jenkins-hbase4:46327] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 15:16:57,929 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 15:16:57,930 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 15:16:57,930 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 15:16:57,930 DEBUG [M:0;jenkins-hbase4:46327] zookeeper.ZKUtil(398): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 15:16:57,930 WARN [M:0;jenkins-hbase4:46327] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 15:16:57,930 INFO [M:0;jenkins-hbase4:46327] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 15:16:57,930 INFO [M:0;jenkins-hbase4:46327] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 15:16:57,930 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 15:16:57,931 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 15:16:57,931 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:57,931 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:57,931 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 15:16:57,931 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 15:16:57,931 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=73.82 KB heapSize=90.72 KB 2023-07-13 15:16:57,946 INFO [M:0;jenkins-hbase4:46327] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=73.82 KB at sequenceid=1155 (bloomFilter=true), to=hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6c88764f9fe147f693165727b4d3a0e3 2023-07-13 15:16:57,952 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6c88764f9fe147f693165727b4d3a0e3 as hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6c88764f9fe147f693165727b4d3a0e3 2023-07-13 15:16:57,957 INFO [M:0;jenkins-hbase4:46327] regionserver.HStore(1080): Added hdfs://localhost:36199/user/jenkins/test-data/1980fac7-2914-5f5f-05c5-4332750ea046/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6c88764f9fe147f693165727b4d3a0e3, entries=24, sequenceid=1155, filesize=8.3 K 2023-07-13 15:16:57,958 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegion(2948): Finished flush of dataSize ~73.82 KB/75595, heapSize ~90.70 KB/92880, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=1155, compaction requested=true 2023-07-13 15:16:57,958 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 15:16:57,958 DEBUG [M:0;jenkins-hbase4:46327] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 15:16:57,971 INFO [M:0;jenkins-hbase4:46327] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 15:16:57,971 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 15:16:57,971 INFO [M:0;jenkins-hbase4:46327] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46327 2023-07-13 15:16:57,983 DEBUG [M:0;jenkins-hbase4:46327] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46327,1689261403181 already deleted, retry=false 2023-07-13 15:16:58,215 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,215 INFO [M:0;jenkins-hbase4:46327] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46327,1689261403181; zookeeper connection closed. 2023-07-13 15:16:58,215 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): master:46327-0x1015f415947001c, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,316 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,316 INFO [RS:0;jenkins-hbase4:40629] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40629,1689261403333; zookeeper connection closed. 
2023-07-13 15:16:58,316 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:40629-0x1015f415947001d, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,316 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6ed31046] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6ed31046 2023-07-13 15:16:58,416 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,416 INFO [RS:1;jenkins-hbase4:35181] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35181,1689261403500; zookeeper connection closed. 2023-07-13 15:16:58,416 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35181-0x1015f415947001e, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,416 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2e1a7ae4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2e1a7ae4 2023-07-13 15:16:58,516 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,516 INFO [RS:2;jenkins-hbase4:35843] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35843,1689261403676; zookeeper connection closed. 2023-07-13 15:16:58,516 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:35843-0x1015f415947001f, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,516 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2329bedb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2329bedb 2023-07-13 15:16:58,616 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,616 INFO [RS:4;jenkins-hbase4:44647] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44647,1689261417244; zookeeper connection closed. 
2023-07-13 15:16:58,616 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): regionserver:44647-0x1015f415947002a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 15:16:58,617 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@34e9c16a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@34e9c16a 2023-07-13 15:16:58,617 INFO [Listener at localhost/35161] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 5 regionserver(s) complete 2023-07-13 15:16:58,617 WARN [Listener at localhost/35161] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:58,640 INFO [Listener at localhost/35161] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:58,745 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:58,745 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-899266584-172.31.14.131-1689261362039 (Datanode Uuid b7a0296c-a721-427d-a6be-dd83259197a1) service to localhost/127.0.0.1:36199 2023-07-13 15:16:58,747 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data5/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:58,747 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data6/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:58,749 WARN [Listener at localhost/35161] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:58,770 INFO [Listener at localhost/35161] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:58,874 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:58,875 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-899266584-172.31.14.131-1689261362039 (Datanode Uuid 9c2b5109-f291-4013-9d71-f50c6e2558e9) service to localhost/127.0.0.1:36199 2023-07-13 15:16:58,875 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data3/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:58,876 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data4/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:58,877 WARN [Listener at localhost/35161] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 15:16:58,879 INFO [Listener at localhost/35161] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:58,982 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 15:16:58,982 WARN [BP-899266584-172.31.14.131-1689261362039 heartbeating to localhost/127.0.0.1:36199] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-899266584-172.31.14.131-1689261362039 (Datanode Uuid 4dd20a9a-0b40-4e00-95d7-124a813012b0) service to localhost/127.0.0.1:36199 2023-07-13 15:16:58,983 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data1/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:58,983 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/213e4eb7-16ae-c62e-6817-d56946d3721b/cluster_26a47582-e617-1bef-28fe-2bab39110f3b/dfs/data/data2/current/BP-899266584-172.31.14.131-1689261362039] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 15:16:59,009 INFO [Listener at localhost/35161] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 15:16:59,135 INFO [Listener at localhost/35161] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 15:16:59,230 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015f415947001b, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 15:16:59,230 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015f415947000a, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 15:16:59,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015f415947001b, quorum=127.0.0.1:56695, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 15:16:59,230 DEBUG [Listener at localhost/35161-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1015f4159470027, quorum=127.0.0.1:56695, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 15:16:59,230 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1015f415947000a, quorum=127.0.0.1:56695, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 15:16:59,231 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): 
VerifyingRSGroupAdminClient-0x1015f4159470027, quorum=127.0.0.1:56695, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring
2023-07-13 15:16:59,268 INFO [Listener at localhost/35161] hbase.HBaseTestingUtility(1293): Minicluster is down